Test Report: KVM_Linux_crio 18925

                    
9bd6871c0608907332c6bb982838c8ee113ad42f:2024-05-20:34544

Failed tests (32/313)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 154.65
32 TestAddons/parallel/MetricsServer 340.43
44 TestAddons/StoppedEnableDisable 154.31
55 TestErrorSpam/setup 42.67
163 TestMultiControlPlane/serial/StopSecondaryNode 141.99
165 TestMultiControlPlane/serial/RestartSecondaryNode 53.56
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 396.97
168 TestMultiControlPlane/serial/DeleteSecondaryNode 19.46
170 TestMultiControlPlane/serial/StopCluster 172.4
230 TestMultiNode/serial/RestartKeepsNodes 304.31
232 TestMultiNode/serial/StopMultiNode 141.35
239 TestPreload 350.42
247 TestKubernetesUpgrade 417.53
260 TestPause/serial/SecondStartNoReconfiguration 378.62
284 TestStartStop/group/old-k8s-version/serial/FirstStart 269.44
289 TestStartStop/group/no-preload/serial/Stop 139.04
296 TestStartStop/group/embed-certs/serial/Stop 138.96
297 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
299 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 83.38
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
307 TestStartStop/group/old-k8s-version/serial/SecondStart 699.81
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.28
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.23
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.3
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.45
315 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 538.54
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.84
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 322.11
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 203.42
TestAddons/parallel/Ingress (154.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-807933 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-807933 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-807933 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1c7de491-1690-49fe-8238-f9e1b0120d62] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1c7de491-1690-49fe-8238-f9e1b0120d62] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005011947s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-807933 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.713079108s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-807933 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.86
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-807933 addons disable ingress-dns --alsologtostderr -v=1: (1.038329099s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-807933 addons disable ingress --alsologtostderr -v=1: (7.785164735s)
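Note: the failing step above is the in-VM curl against the ingress endpoint; the remote curl exiting with status 28 indicates a timeout. A minimal manual re-check of the same path could look like the sketch below (assumptions: a live addons-807933 profile set up as above, checked before the ingress/ingress-dns addons are disabled at the end of the test; the -v and --max-time flags are debugging additions, not part of the test):

# Sketch only: verify the ingress controller and backend, then repeat the curl the test runs.
kubectl --context addons-807933 -n ingress-nginx get pods      # controller pod should be Running
kubectl --context addons-807933 get ingress                    # a rule for nginx.example.com should exist
kubectl --context addons-807933 get pods -l run=nginx          # backend pod from testdata/nginx-pod-svc.yaml
out/minikube-linux-amd64 -p addons-807933 ssh \
  "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"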
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-807933 -n addons-807933
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-807933 logs -n 25: (1.309862893s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-824408 | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | -p download-only-824408                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-824408                                                                     | download-only-824408 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-865403                                                                     | download-only-865403 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-824408                                                                     | download-only-824408 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-680660 | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | binary-mirror-680660                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45873                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-680660                                                                     | binary-mirror-680660 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-807933 --wait=true                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:25 UTC |
	|         | -p addons-807933                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | -p addons-807933                                                                            |                      |         |         |                     |                     |
	| ip      | addons-807933 ip                                                                            | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-807933 ssh curl -s                                                                   | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-807933 ssh cat                                                                       | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | /opt/local-path-provisioner/pvc-c2976ab9-5fd0-4fd1-8996-109c08730171_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-807933 addons                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-807933 addons                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-807933 ip                                                                            | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:27 UTC | 20 May 24 10:27 UTC |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:27 UTC | 20 May 24 10:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:27 UTC | 20 May 24 10:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:21:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:21:36.901543   12352 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:21:36.901773   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:21:36.901781   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:21:36.901785   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:21:36.901985   12352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:21:36.902556   12352 out.go:298] Setting JSON to false
	I0520 10:21:36.903332   12352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":240,"bootTime":1716200257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:21:36.903389   12352 start.go:139] virtualization: kvm guest
	I0520 10:21:36.905525   12352 out.go:177] * [addons-807933] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:21:36.907151   12352 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:21:36.908506   12352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:21:36.907133   12352 notify.go:220] Checking for updates...
	I0520 10:21:36.911067   12352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:21:36.912327   12352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:36.913462   12352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:21:36.914680   12352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:21:36.916063   12352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:21:36.947276   12352 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 10:21:36.948582   12352 start.go:297] selected driver: kvm2
	I0520 10:21:36.948598   12352 start.go:901] validating driver "kvm2" against <nil>
	I0520 10:21:36.948624   12352 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:21:36.949330   12352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:21:36.949409   12352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:21:36.963367   12352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:21:36.963409   12352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:21:36.963609   12352 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:21:36.963634   12352 cni.go:84] Creating CNI manager for ""
	I0520 10:21:36.963641   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:21:36.963650   12352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 10:21:36.963692   12352 start.go:340] cluster config:
	{Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:21:36.963773   12352 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:21:36.965537   12352 out.go:177] * Starting "addons-807933" primary control-plane node in "addons-807933" cluster
	I0520 10:21:36.966826   12352 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:21:36.966861   12352 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:21:36.966871   12352 cache.go:56] Caching tarball of preloaded images
	I0520 10:21:36.966962   12352 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:21:36.966977   12352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:21:36.967398   12352 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/config.json ...
	I0520 10:21:36.967424   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/config.json: {Name:mk89a6bd9d43695cbac9dcfd1e40eefc7cccf357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:21:36.967568   12352 start.go:360] acquireMachinesLock for addons-807933: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:21:36.967614   12352 start.go:364] duration metric: took 32.236µs to acquireMachinesLock for "addons-807933"
	I0520 10:21:36.967631   12352 start.go:93] Provisioning new machine with config: &{Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:21:36.967720   12352 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 10:21:36.969275   12352 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 10:21:36.969385   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:21:36.969426   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:21:36.982852   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0520 10:21:36.983252   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:21:36.983741   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:21:36.983761   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:21:36.984071   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:21:36.984234   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:21:36.984394   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:21:36.984512   12352 start.go:159] libmachine.API.Create for "addons-807933" (driver="kvm2")
	I0520 10:21:36.984541   12352 client.go:168] LocalClient.Create starting
	I0520 10:21:36.984586   12352 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:21:37.161005   12352 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:21:37.333200   12352 main.go:141] libmachine: Running pre-create checks...
	I0520 10:21:37.333224   12352 main.go:141] libmachine: (addons-807933) Calling .PreCreateCheck
	I0520 10:21:37.333721   12352 main.go:141] libmachine: (addons-807933) Calling .GetConfigRaw
	I0520 10:21:37.334141   12352 main.go:141] libmachine: Creating machine...
	I0520 10:21:37.334155   12352 main.go:141] libmachine: (addons-807933) Calling .Create
	I0520 10:21:37.334293   12352 main.go:141] libmachine: (addons-807933) Creating KVM machine...
	I0520 10:21:37.335577   12352 main.go:141] libmachine: (addons-807933) DBG | found existing default KVM network
	I0520 10:21:37.336287   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.336159   12374 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0520 10:21:37.336308   12352 main.go:141] libmachine: (addons-807933) DBG | created network xml: 
	I0520 10:21:37.336321   12352 main.go:141] libmachine: (addons-807933) DBG | <network>
	I0520 10:21:37.336335   12352 main.go:141] libmachine: (addons-807933) DBG |   <name>mk-addons-807933</name>
	I0520 10:21:37.336353   12352 main.go:141] libmachine: (addons-807933) DBG |   <dns enable='no'/>
	I0520 10:21:37.336365   12352 main.go:141] libmachine: (addons-807933) DBG |   
	I0520 10:21:37.336404   12352 main.go:141] libmachine: (addons-807933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 10:21:37.336424   12352 main.go:141] libmachine: (addons-807933) DBG |     <dhcp>
	I0520 10:21:37.336432   12352 main.go:141] libmachine: (addons-807933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 10:21:37.336441   12352 main.go:141] libmachine: (addons-807933) DBG |     </dhcp>
	I0520 10:21:37.336447   12352 main.go:141] libmachine: (addons-807933) DBG |   </ip>
	I0520 10:21:37.336452   12352 main.go:141] libmachine: (addons-807933) DBG |   
	I0520 10:21:37.336458   12352 main.go:141] libmachine: (addons-807933) DBG | </network>
	I0520 10:21:37.336466   12352 main.go:141] libmachine: (addons-807933) DBG | 
	I0520 10:21:37.341935   12352 main.go:141] libmachine: (addons-807933) DBG | trying to create private KVM network mk-addons-807933 192.168.39.0/24...
	I0520 10:21:37.403821   12352 main.go:141] libmachine: (addons-807933) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933 ...
	I0520 10:21:37.403854   12352 main.go:141] libmachine: (addons-807933) DBG | private KVM network mk-addons-807933 192.168.39.0/24 created
	I0520 10:21:37.403867   12352 main.go:141] libmachine: (addons-807933) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:21:37.403885   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.403748   12374 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:37.403939   12352 main.go:141] libmachine: (addons-807933) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:21:37.665472   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.665383   12374 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa...
	I0520 10:21:37.748524   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.748399   12374 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/addons-807933.rawdisk...
	I0520 10:21:37.748551   12352 main.go:141] libmachine: (addons-807933) DBG | Writing magic tar header
	I0520 10:21:37.748561   12352 main.go:141] libmachine: (addons-807933) DBG | Writing SSH key tar header
	I0520 10:21:37.748572   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.748540   12374 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933 ...
	I0520 10:21:37.748669   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933
	I0520 10:21:37.748690   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:21:37.748703   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933 (perms=drwx------)
	I0520 10:21:37.748716   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:21:37.748726   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:21:37.748741   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:21:37.748752   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:21:37.748766   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:21:37.748782   12352 main.go:141] libmachine: (addons-807933) Creating domain...
	I0520 10:21:37.748792   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:37.748805   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:21:37.748816   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:21:37.748827   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:21:37.748839   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home
	I0520 10:21:37.748866   12352 main.go:141] libmachine: (addons-807933) DBG | Skipping /home - not owner
	I0520 10:21:37.749721   12352 main.go:141] libmachine: (addons-807933) define libvirt domain using xml: 
	I0520 10:21:37.749733   12352 main.go:141] libmachine: (addons-807933) <domain type='kvm'>
	I0520 10:21:37.749742   12352 main.go:141] libmachine: (addons-807933)   <name>addons-807933</name>
	I0520 10:21:37.749755   12352 main.go:141] libmachine: (addons-807933)   <memory unit='MiB'>4000</memory>
	I0520 10:21:37.749764   12352 main.go:141] libmachine: (addons-807933)   <vcpu>2</vcpu>
	I0520 10:21:37.749778   12352 main.go:141] libmachine: (addons-807933)   <features>
	I0520 10:21:37.749788   12352 main.go:141] libmachine: (addons-807933)     <acpi/>
	I0520 10:21:37.749797   12352 main.go:141] libmachine: (addons-807933)     <apic/>
	I0520 10:21:37.749805   12352 main.go:141] libmachine: (addons-807933)     <pae/>
	I0520 10:21:37.749814   12352 main.go:141] libmachine: (addons-807933)     
	I0520 10:21:37.749819   12352 main.go:141] libmachine: (addons-807933)   </features>
	I0520 10:21:37.749824   12352 main.go:141] libmachine: (addons-807933)   <cpu mode='host-passthrough'>
	I0520 10:21:37.749833   12352 main.go:141] libmachine: (addons-807933)   
	I0520 10:21:37.749846   12352 main.go:141] libmachine: (addons-807933)   </cpu>
	I0520 10:21:37.749854   12352 main.go:141] libmachine: (addons-807933)   <os>
	I0520 10:21:37.749861   12352 main.go:141] libmachine: (addons-807933)     <type>hvm</type>
	I0520 10:21:37.749908   12352 main.go:141] libmachine: (addons-807933)     <boot dev='cdrom'/>
	I0520 10:21:37.749932   12352 main.go:141] libmachine: (addons-807933)     <boot dev='hd'/>
	I0520 10:21:37.749956   12352 main.go:141] libmachine: (addons-807933)     <bootmenu enable='no'/>
	I0520 10:21:37.749977   12352 main.go:141] libmachine: (addons-807933)   </os>
	I0520 10:21:37.749990   12352 main.go:141] libmachine: (addons-807933)   <devices>
	I0520 10:21:37.750006   12352 main.go:141] libmachine: (addons-807933)     <disk type='file' device='cdrom'>
	I0520 10:21:37.750028   12352 main.go:141] libmachine: (addons-807933)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/boot2docker.iso'/>
	I0520 10:21:37.750040   12352 main.go:141] libmachine: (addons-807933)       <target dev='hdc' bus='scsi'/>
	I0520 10:21:37.750050   12352 main.go:141] libmachine: (addons-807933)       <readonly/>
	I0520 10:21:37.750062   12352 main.go:141] libmachine: (addons-807933)     </disk>
	I0520 10:21:37.750072   12352 main.go:141] libmachine: (addons-807933)     <disk type='file' device='disk'>
	I0520 10:21:37.750081   12352 main.go:141] libmachine: (addons-807933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:21:37.750089   12352 main.go:141] libmachine: (addons-807933)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/addons-807933.rawdisk'/>
	I0520 10:21:37.750097   12352 main.go:141] libmachine: (addons-807933)       <target dev='hda' bus='virtio'/>
	I0520 10:21:37.750102   12352 main.go:141] libmachine: (addons-807933)     </disk>
	I0520 10:21:37.750109   12352 main.go:141] libmachine: (addons-807933)     <interface type='network'>
	I0520 10:21:37.750115   12352 main.go:141] libmachine: (addons-807933)       <source network='mk-addons-807933'/>
	I0520 10:21:37.750122   12352 main.go:141] libmachine: (addons-807933)       <model type='virtio'/>
	I0520 10:21:37.750133   12352 main.go:141] libmachine: (addons-807933)     </interface>
	I0520 10:21:37.750141   12352 main.go:141] libmachine: (addons-807933)     <interface type='network'>
	I0520 10:21:37.750148   12352 main.go:141] libmachine: (addons-807933)       <source network='default'/>
	I0520 10:21:37.750152   12352 main.go:141] libmachine: (addons-807933)       <model type='virtio'/>
	I0520 10:21:37.750159   12352 main.go:141] libmachine: (addons-807933)     </interface>
	I0520 10:21:37.750164   12352 main.go:141] libmachine: (addons-807933)     <serial type='pty'>
	I0520 10:21:37.750172   12352 main.go:141] libmachine: (addons-807933)       <target port='0'/>
	I0520 10:21:37.750176   12352 main.go:141] libmachine: (addons-807933)     </serial>
	I0520 10:21:37.750184   12352 main.go:141] libmachine: (addons-807933)     <console type='pty'>
	I0520 10:21:37.750189   12352 main.go:141] libmachine: (addons-807933)       <target type='serial' port='0'/>
	I0520 10:21:37.750197   12352 main.go:141] libmachine: (addons-807933)     </console>
	I0520 10:21:37.750202   12352 main.go:141] libmachine: (addons-807933)     <rng model='virtio'>
	I0520 10:21:37.750211   12352 main.go:141] libmachine: (addons-807933)       <backend model='random'>/dev/random</backend>
	I0520 10:21:37.750218   12352 main.go:141] libmachine: (addons-807933)     </rng>
	I0520 10:21:37.750223   12352 main.go:141] libmachine: (addons-807933)     
	I0520 10:21:37.750229   12352 main.go:141] libmachine: (addons-807933)     
	I0520 10:21:37.750235   12352 main.go:141] libmachine: (addons-807933)   </devices>
	I0520 10:21:37.750241   12352 main.go:141] libmachine: (addons-807933) </domain>
	I0520 10:21:37.750248   12352 main.go:141] libmachine: (addons-807933) 
	I0520 10:21:37.755765   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:7b:b6:89 in network default
	I0520 10:21:37.756304   12352 main.go:141] libmachine: (addons-807933) Ensuring networks are active...
	I0520 10:21:37.756324   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:37.756900   12352 main.go:141] libmachine: (addons-807933) Ensuring network default is active
	I0520 10:21:37.757195   12352 main.go:141] libmachine: (addons-807933) Ensuring network mk-addons-807933 is active
	I0520 10:21:37.757664   12352 main.go:141] libmachine: (addons-807933) Getting domain xml...
	I0520 10:21:37.758263   12352 main.go:141] libmachine: (addons-807933) Creating domain...
	I0520 10:21:39.114451   12352 main.go:141] libmachine: (addons-807933) Waiting to get IP...
	I0520 10:21:39.115120   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.115504   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.115556   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.115494   12374 retry.go:31] will retry after 207.51322ms: waiting for machine to come up
	I0520 10:21:39.325016   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.325418   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.325464   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.325412   12374 retry.go:31] will retry after 290.80023ms: waiting for machine to come up
	I0520 10:21:39.617921   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.618379   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.618401   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.618362   12374 retry.go:31] will retry after 329.008994ms: waiting for machine to come up
	I0520 10:21:39.949057   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.949514   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.949543   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.949455   12374 retry.go:31] will retry after 519.550951ms: waiting for machine to come up
	I0520 10:21:40.470072   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:40.470752   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:40.470801   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:40.470688   12374 retry.go:31] will retry after 652.239224ms: waiting for machine to come up
	I0520 10:21:41.124533   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:41.124997   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:41.125025   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:41.124919   12374 retry.go:31] will retry after 674.149446ms: waiting for machine to come up
	I0520 10:21:41.800868   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:41.801371   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:41.801392   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:41.801338   12374 retry.go:31] will retry after 1.15805065s: waiting for machine to come up
	I0520 10:21:42.960475   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:42.960840   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:42.960874   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:42.960831   12374 retry.go:31] will retry after 1.145933963s: waiting for machine to come up
	I0520 10:21:44.108108   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:44.108501   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:44.108529   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:44.108461   12374 retry.go:31] will retry after 1.384710006s: waiting for machine to come up
	I0520 10:21:45.495028   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:45.495367   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:45.495381   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:45.495335   12374 retry.go:31] will retry after 1.836113332s: waiting for machine to come up
	I0520 10:21:47.333316   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:47.333741   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:47.333775   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:47.333665   12374 retry.go:31] will retry after 2.54852114s: waiting for machine to come up
	I0520 10:21:49.883302   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:49.883637   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:49.883659   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:49.883594   12374 retry.go:31] will retry after 2.68971717s: waiting for machine to come up
	I0520 10:21:52.574915   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:52.575294   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:52.575322   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:52.575261   12374 retry.go:31] will retry after 3.09017513s: waiting for machine to come up
	I0520 10:21:55.669446   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:55.669871   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:55.669899   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:55.669813   12374 retry.go:31] will retry after 4.957723981s: waiting for machine to come up
	I0520 10:22:00.629996   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.630423   12352 main.go:141] libmachine: (addons-807933) Found IP for machine: 192.168.39.86
	I0520 10:22:00.630438   12352 main.go:141] libmachine: (addons-807933) Reserving static IP address...
	I0520 10:22:00.630463   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has current primary IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.630857   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find host DHCP lease matching {name: "addons-807933", mac: "52:54:00:ab:70:6f", ip: "192.168.39.86"} in network mk-addons-807933
	I0520 10:22:00.698284   12352 main.go:141] libmachine: (addons-807933) DBG | Getting to WaitForSSH function...
	I0520 10:22:00.698315   12352 main.go:141] libmachine: (addons-807933) Reserved static IP address: 192.168.39.86
	I0520 10:22:00.698329   12352 main.go:141] libmachine: (addons-807933) Waiting for SSH to be available...
	I0520 10:22:00.700476   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.700791   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:00.700823   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.700980   12352 main.go:141] libmachine: (addons-807933) DBG | Using SSH client type: external
	I0520 10:22:00.701010   12352 main.go:141] libmachine: (addons-807933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa (-rw-------)
	I0520 10:22:00.701050   12352 main.go:141] libmachine: (addons-807933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:22:00.701076   12352 main.go:141] libmachine: (addons-807933) DBG | About to run SSH command:
	I0520 10:22:00.701090   12352 main.go:141] libmachine: (addons-807933) DBG | exit 0
	I0520 10:22:00.833271   12352 main.go:141] libmachine: (addons-807933) DBG | SSH cmd err, output: <nil>: 
	I0520 10:22:00.833552   12352 main.go:141] libmachine: (addons-807933) KVM machine creation complete!
	I0520 10:22:00.833867   12352 main.go:141] libmachine: (addons-807933) Calling .GetConfigRaw
	I0520 10:22:00.834386   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:00.834581   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:00.834739   12352 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:22:00.834752   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:00.835872   12352 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:22:00.835886   12352 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:22:00.835891   12352 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:22:00.835896   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:00.838191   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.838537   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:00.838565   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.838706   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:00.838880   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.839030   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.839137   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:00.839269   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:00.839509   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:00.839523   12352 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:22:00.944029   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:22:00.944068   12352 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:22:00.944079   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:00.946464   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.946849   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:00.946873   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.946986   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:00.947178   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.947337   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.947491   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:00.947682   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:00.947888   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:00.947904   12352 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:22:01.053668   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:22:01.053784   12352 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:22:01.053808   12352 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:22:01.053819   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:22:01.054066   12352 buildroot.go:166] provisioning hostname "addons-807933"
	I0520 10:22:01.054090   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:22:01.054300   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.056504   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.056829   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.056854   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.056979   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.057135   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.057280   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.057549   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.057706   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:01.057867   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:01.057879   12352 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-807933 && echo "addons-807933" | sudo tee /etc/hostname
	I0520 10:22:01.179301   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-807933
	
	I0520 10:22:01.179322   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.182868   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.183240   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.183268   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.183565   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.183732   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.183888   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.184023   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.184181   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:01.184384   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:01.184410   12352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-807933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-807933/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-807933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:22:01.301764   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:22:01.301799   12352 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:22:01.301826   12352 buildroot.go:174] setting up certificates
	I0520 10:22:01.301837   12352 provision.go:84] configureAuth start
	I0520 10:22:01.301845   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:22:01.302115   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:01.304657   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.305011   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.305045   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.305193   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.307005   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.307297   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.307328   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.307423   12352 provision.go:143] copyHostCerts
	I0520 10:22:01.307499   12352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:22:01.307624   12352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:22:01.307700   12352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:22:01.307763   12352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.addons-807933 san=[127.0.0.1 192.168.39.86 addons-807933 localhost minikube]
	I0520 10:22:01.485009   12352 provision.go:177] copyRemoteCerts
	I0520 10:22:01.485065   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:22:01.485087   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.487300   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.487569   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.487600   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.487727   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.487879   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.488020   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.488178   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:01.571179   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:22:01.595793   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:22:01.620065   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
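	(Illustrative only, not part of this run.) The SAN list used for the server certificate is recorded in the "generating server cert" line above; to double-check what actually landed in /etc/docker/server.pem on the guest, one hedged option is to run, on the guest (e.g. via "minikube ssh"):
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'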
	I0520 10:22:01.643224   12352 provision.go:87] duration metric: took 341.376852ms to configureAuth
	I0520 10:22:01.643245   12352 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:22:01.643395   12352 config.go:182] Loaded profile config "addons-807933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:22:01.643458   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.645898   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.646173   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.646195   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.646365   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.646514   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.646624   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.646705   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.646805   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:01.646994   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:01.647009   12352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:22:01.910427   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:22:01.910456   12352 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:22:01.910465   12352 main.go:141] libmachine: (addons-807933) Calling .GetURL
	I0520 10:22:01.911672   12352 main.go:141] libmachine: (addons-807933) DBG | Using libvirt version 6000000
	I0520 10:22:01.913573   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.913899   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.913921   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.914116   12352 main.go:141] libmachine: Docker is up and running!
	I0520 10:22:01.914136   12352 main.go:141] libmachine: Reticulating splines...
	I0520 10:22:01.914144   12352 client.go:171] duration metric: took 24.929592731s to LocalClient.Create
	I0520 10:22:01.914169   12352 start.go:167] duration metric: took 24.92965939s to libmachine.API.Create "addons-807933"
	I0520 10:22:01.914179   12352 start.go:293] postStartSetup for "addons-807933" (driver="kvm2")
	I0520 10:22:01.914198   12352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:22:01.914223   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:01.914460   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:22:01.914483   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.916363   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.916624   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.916655   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.916748   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.916907   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.917074   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.917196   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:01.999784   12352 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:22:02.003909   12352 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:22:02.003930   12352 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:22:02.003993   12352 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:22:02.004014   12352 start.go:296] duration metric: took 89.826536ms for postStartSetup
	I0520 10:22:02.004044   12352 main.go:141] libmachine: (addons-807933) Calling .GetConfigRaw
	I0520 10:22:02.004542   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:02.006866   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.007185   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.007211   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.007467   12352 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/config.json ...
	I0520 10:22:02.007661   12352 start.go:128] duration metric: took 25.039930927s to createHost
	I0520 10:22:02.007684   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:02.009613   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.009902   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.009929   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.010001   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:02.010198   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.010335   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.010490   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:02.010597   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:02.010757   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:02.010767   12352 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:22:02.117605   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716200522.094602142
	
	I0520 10:22:02.117634   12352 fix.go:216] guest clock: 1716200522.094602142
	I0520 10:22:02.117645   12352 fix.go:229] Guest: 2024-05-20 10:22:02.094602142 +0000 UTC Remote: 2024-05-20 10:22:02.007672113 +0000 UTC m=+25.136645980 (delta=86.930029ms)
	I0520 10:22:02.117703   12352 fix.go:200] guest clock delta is within tolerance: 86.930029ms
	I0520 10:22:02.117715   12352 start.go:83] releasing machines lock for "addons-807933", held for 25.150089452s
	I0520 10:22:02.117743   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.118010   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:02.120226   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.120468   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.120505   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.120582   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.121176   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.121355   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.121426   12352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:22:02.121483   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:02.121564   12352 ssh_runner.go:195] Run: cat /version.json
	I0520 10:22:02.121595   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:02.123698   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124019   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.124045   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124207   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124229   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:02.124407   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.124558   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:02.124611   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.124636   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124667   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:02.124820   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:02.124971   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.125131   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:02.125242   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	W0520 10:22:02.226073   12352 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:22:02.226157   12352 ssh_runner.go:195] Run: systemctl --version
	I0520 10:22:02.232287   12352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:22:02.391810   12352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:22:02.397827   12352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:22:02.397882   12352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:22:02.414487   12352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:22:02.414515   12352 start.go:494] detecting cgroup driver to use...
	I0520 10:22:02.414599   12352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:22:02.431872   12352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:22:02.446114   12352 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:22:02.446173   12352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:22:02.459737   12352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:22:02.472848   12352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:22:02.594036   12352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:22:02.738348   12352 docker.go:233] disabling docker service ...
	I0520 10:22:02.738405   12352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:22:02.752143   12352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:22:02.765332   12352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:22:02.885365   12352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:22:03.004315   12352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:22:03.018879   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:22:03.037825   12352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:22:03.037880   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.048182   12352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:22:03.048244   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.058326   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.068331   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.078511   12352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:22:03.089130   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.099161   12352 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.116582   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
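	(Sketch inferred from the sed commands above, not a dump of the actual file.) Taken together, these edits leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]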
	I0520 10:22:03.126637   12352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:22:03.135609   12352 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:22:03.135663   12352 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:22:03.148608   12352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:22:03.158172   12352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:22:03.267905   12352 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:22:03.403443   12352 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:22:03.403521   12352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:22:03.408494   12352 start.go:562] Will wait 60s for crictl version
	I0520 10:22:03.408538   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:22:03.412131   12352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:22:03.451794   12352 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:22:03.451899   12352 ssh_runner.go:195] Run: crio --version
	I0520 10:22:03.479305   12352 ssh_runner.go:195] Run: crio --version
	I0520 10:22:03.507771   12352 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:22:03.509123   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:03.511510   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:03.511842   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:03.511871   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:03.512077   12352 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:22:03.516133   12352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:22:03.528472   12352 kubeadm.go:877] updating cluster {Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:22:03.528599   12352 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:22:03.528659   12352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:22:03.560373   12352 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 10:22:03.560487   12352 ssh_runner.go:195] Run: which lz4
	I0520 10:22:03.564446   12352 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 10:22:03.568574   12352 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 10:22:03.568599   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 10:22:04.882322   12352 crio.go:462] duration metric: took 1.317908901s to copy over tarball
	I0520 10:22:04.882388   12352 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 10:22:07.176719   12352 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.294308037s)
	I0520 10:22:07.176741   12352 crio.go:469] duration metric: took 2.2943906s to extract the tarball
	I0520 10:22:07.176750   12352 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 10:22:07.214678   12352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:22:07.255971   12352 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:22:07.256000   12352 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:22:07.256010   12352 kubeadm.go:928] updating node { 192.168.39.86 8443 v1.30.1 crio true true} ...
	I0520 10:22:07.256141   12352 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-807933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:22:07.256217   12352 ssh_runner.go:195] Run: crio config
	I0520 10:22:07.305430   12352 cni.go:84] Creating CNI manager for ""
	I0520 10:22:07.305458   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:22:07.305480   12352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:22:07.305505   12352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-807933 NodeName:addons-807933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:22:07.305665   12352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-807933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 10:22:07.305731   12352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:22:07.315662   12352 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:22:07.315740   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 10:22:07.325323   12352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 10:22:07.341600   12352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:22:07.357804   12352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
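	(Illustrative only, not part of this run.) The kubeadm config rendered above has just been staged as /var/tmp/minikube/kubeadm.yaml.new; if a file like this ever needs a manual sanity check, kubeadm can exercise it without changing the node:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run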
	I0520 10:22:07.374208   12352 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I0520 10:22:07.377818   12352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:22:07.389721   12352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:22:07.506821   12352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:22:07.524640   12352 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933 for IP: 192.168.39.86
	I0520 10:22:07.524670   12352 certs.go:194] generating shared ca certs ...
	I0520 10:22:07.524690   12352 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:07.524860   12352 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:22:07.859161   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt ...
	I0520 10:22:07.859192   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt: {Name:mkeca0cd0f9f687c315d5a3735bd94b1a83d9372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:07.859393   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key ...
	I0520 10:22:07.859408   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key: {Name:mk1adfa07dfdb8a07e4270fc6a839960f920bb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:07.859505   12352 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:22:08.002033   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt ...
	I0520 10:22:08.002064   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt: {Name:mk10798fc9395015430d2b33bb5f4e7d2bba6c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.002249   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key ...
	I0520 10:22:08.002266   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key: {Name:mk762fd9601e1dca9b2da8662366674c5d5778e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.002362   12352 certs.go:256] generating profile certs ...
	I0520 10:22:08.002430   12352 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.key
	I0520 10:22:08.002445   12352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt with IP's: []
	I0520 10:22:08.302306   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt ...
	I0520 10:22:08.302337   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: {Name:mk62198dfa51dceca363447b830b722c8dfe43cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.302533   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.key ...
	I0520 10:22:08.302553   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.key: {Name:mk54e198ed76401d154d64eca9152db2e37699d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.302653   12352 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124
	I0520 10:22:08.302679   12352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.86]
	I0520 10:22:08.364166   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124 ...
	I0520 10:22:08.364193   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124: {Name:mk645a54b93c57c3c359c77aba86b289ab5b5bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.364370   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124 ...
	I0520 10:22:08.364387   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124: {Name:mk69018507f95882ab2d08281937ca12be65320f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.364478   12352 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt
	I0520 10:22:08.364571   12352 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key
	I0520 10:22:08.364635   12352 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key
	I0520 10:22:08.364659   12352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt with IP's: []
	I0520 10:22:08.433556   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt ...
	I0520 10:22:08.433585   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt: {Name:mk83e9f6d29aa6cbc445f9297c8ad2ac57f2cafa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.433758   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key ...
	I0520 10:22:08.433774   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key: {Name:mk28d74a5926637e890a27b48152bcf69ab76f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.433987   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:22:08.434025   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:22:08.434061   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:22:08.434095   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:22:08.434632   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:22:08.467414   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:22:08.493807   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:22:08.521816   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:22:08.546906   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 10:22:08.571086   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 10:22:08.594923   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:22:08.619672   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:22:08.643539   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:22:08.667547   12352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:22:08.685139   12352 ssh_runner.go:195] Run: openssl version
	I0520 10:22:08.691090   12352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:22:08.701658   12352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:22:08.706358   12352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:22:08.706412   12352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:22:08.712274   12352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
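	(Illustrative only, mirroring the two commands the log just ran.) The b5213941.0 symlink name is the CA certificate's subject hash as printed by "openssl x509 -hash"; done by hand the equivalent is:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"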
	I0520 10:22:08.723289   12352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:22:08.727589   12352 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:22:08.727639   12352 kubeadm.go:391] StartCluster: {Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:22:08.727720   12352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:22:08.727760   12352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:22:08.763557   12352 cri.go:89] found id: ""
	I0520 10:22:08.763622   12352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:22:08.773781   12352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:22:08.783203   12352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:22:08.792601   12352 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:22:08.792623   12352 kubeadm.go:156] found existing configuration files:
	
	I0520 10:22:08.792669   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:22:08.802078   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:22:08.802192   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:22:08.811829   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:22:08.820648   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:22:08.820704   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:22:08.829829   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:22:08.838729   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:22:08.838790   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:22:08.848329   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:22:08.857258   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:22:08.857311   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 10:22:08.866624   12352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 10:22:08.925514   12352 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:22:08.925655   12352 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:22:09.069923   12352 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:22:09.070048   12352 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:22:09.070184   12352 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 10:22:09.286467   12352 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:22:09.491823   12352 out.go:204]   - Generating certificates and keys ...
	I0520 10:22:09.491959   12352 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:22:09.492029   12352 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:22:09.500626   12352 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:22:09.911390   12352 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:22:10.318819   12352 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:22:10.463944   12352 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:22:10.823134   12352 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:22:10.823372   12352 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-807933 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I0520 10:22:10.943427   12352 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:22:10.943616   12352 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-807933 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I0520 10:22:11.044533   12352 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:22:11.143164   12352 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:22:11.393888   12352 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:22:11.394065   12352 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:22:11.508659   12352 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:22:11.625588   12352 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:22:11.868433   12352 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:22:12.055937   12352 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:22:12.265190   12352 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:22:12.265898   12352 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:22:12.270025   12352 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:22:12.271917   12352 out.go:204]   - Booting up control plane ...
	I0520 10:22:12.272014   12352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:22:12.272107   12352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:22:12.272163   12352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:22:12.288002   12352 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:22:12.288136   12352 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:22:12.288184   12352 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:22:12.406687   12352 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:22:12.406785   12352 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:22:13.408710   12352 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003056036s
	I0520 10:22:13.408826   12352 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:22:17.908640   12352 kubeadm.go:309] [api-check] The API server is healthy after 4.502208626s
	I0520 10:22:17.922452   12352 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:22:17.947167   12352 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:22:17.975764   12352 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:22:17.975982   12352 kubeadm.go:309] [mark-control-plane] Marking the node addons-807933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:22:17.988329   12352 kubeadm.go:309] [bootstrap-token] Using token: g6iqut.p8aqjgcw9foiwg7c
	I0520 10:22:17.989761   12352 out.go:204]   - Configuring RBAC rules ...
	I0520 10:22:17.989911   12352 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:22:17.995026   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:22:18.005577   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:22:18.013485   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:22:18.017946   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:22:18.021627   12352 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:22:18.317607   12352 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:22:18.766458   12352 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:22:19.315532   12352 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:22:19.315554   12352 kubeadm.go:309] 
	I0520 10:22:19.315620   12352 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:22:19.315629   12352 kubeadm.go:309] 
	I0520 10:22:19.315764   12352 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:22:19.315792   12352 kubeadm.go:309] 
	I0520 10:22:19.315837   12352 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:22:19.315917   12352 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:22:19.315988   12352 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:22:19.315998   12352 kubeadm.go:309] 
	I0520 10:22:19.316111   12352 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:22:19.316130   12352 kubeadm.go:309] 
	I0520 10:22:19.316197   12352 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:22:19.316209   12352 kubeadm.go:309] 
	I0520 10:22:19.316289   12352 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:22:19.316389   12352 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:22:19.316491   12352 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:22:19.316501   12352 kubeadm.go:309] 
	I0520 10:22:19.316624   12352 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:22:19.316764   12352 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:22:19.316775   12352 kubeadm.go:309] 
	I0520 10:22:19.316889   12352 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token g6iqut.p8aqjgcw9foiwg7c \
	I0520 10:22:19.317066   12352 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 10:22:19.317103   12352 kubeadm.go:309] 	--control-plane 
	I0520 10:22:19.317114   12352 kubeadm.go:309] 
	I0520 10:22:19.317234   12352 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:22:19.317246   12352 kubeadm.go:309] 
	I0520 10:22:19.317357   12352 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token g6iqut.p8aqjgcw9foiwg7c \
	I0520 10:22:19.317497   12352 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 10:22:19.317699   12352 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 10:22:19.317792   12352 cni.go:84] Creating CNI manager for ""
	I0520 10:22:19.317810   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:22:19.319550   12352 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 10:22:19.320758   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 10:22:19.331716   12352 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 10:22:19.354330   12352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:22:19.354424   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:19.354469   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-807933 minikube.k8s.io/updated_at=2024_05_20T10_22_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=addons-807933 minikube.k8s.io/primary=true
	I0520 10:22:19.387027   12352 ops.go:34] apiserver oom_adj: -16
	I0520 10:22:19.500165   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:20.000216   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:20.500243   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:21.000961   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:21.501052   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:22.000376   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:22.500339   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:23.000816   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:23.500724   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:24.001220   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:24.501184   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:25.000238   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:25.500972   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:26.000661   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:26.500849   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:27.000235   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:27.500859   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:28.000338   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:28.500484   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:29.001352   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:29.500913   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:30.001189   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:30.500882   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:31.000826   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:31.500267   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:32.000991   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:32.501076   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:32.584586   12352 kubeadm.go:1107] duration metric: took 13.230228729s to wait for elevateKubeSystemPrivileges
	W0520 10:22:32.584624   12352 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:22:32.584635   12352 kubeadm.go:393] duration metric: took 23.856999493s to StartCluster
	I0520 10:22:32.584657   12352 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:32.584801   12352 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:22:32.585261   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:32.585460   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:22:32.585493   12352 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:22:32.587599   12352 out.go:177] * Verifying Kubernetes components...
	I0520 10:22:32.585548   12352 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 10:22:32.585664   12352 config.go:182] Loaded profile config "addons-807933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:22:32.588989   12352 addons.go:69] Setting cloud-spanner=true in profile "addons-807933"
	I0520 10:22:32.588999   12352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:22:32.589010   12352 addons.go:69] Setting inspektor-gadget=true in profile "addons-807933"
	I0520 10:22:32.589026   12352 addons.go:69] Setting volumesnapshots=true in profile "addons-807933"
	I0520 10:22:32.589047   12352 addons.go:234] Setting addon cloud-spanner=true in "addons-807933"
	I0520 10:22:32.589058   12352 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-807933"
	I0520 10:22:32.589006   12352 addons.go:69] Setting metrics-server=true in profile "addons-807933"
	I0520 10:22:32.589079   12352 addons.go:69] Setting ingress=true in profile "addons-807933"
	I0520 10:22:32.589090   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589097   12352 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-807933"
	I0520 10:22:32.589089   12352 addons.go:69] Setting gcp-auth=true in profile "addons-807933"
	I0520 10:22:32.589107   12352 addons.go:234] Setting addon metrics-server=true in "addons-807933"
	I0520 10:22:32.589093   12352 addons.go:69] Setting storage-provisioner=true in profile "addons-807933"
	I0520 10:22:32.589118   12352 addons.go:234] Setting addon ingress=true in "addons-807933"
	I0520 10:22:32.589126   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589133   12352 mustload.go:65] Loading cluster: addons-807933
	I0520 10:22:32.589117   12352 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-807933"
	I0520 10:22:32.589118   12352 addons.go:69] Setting ingress-dns=true in profile "addons-807933"
	I0520 10:22:32.589149   12352 addons.go:234] Setting addon storage-provisioner=true in "addons-807933"
	I0520 10:22:32.589153   12352 addons.go:69] Setting registry=true in profile "addons-807933"
	I0520 10:22:32.589417   12352 addons.go:234] Setting addon registry=true in "addons-807933"
	I0520 10:22:32.589491   12352 addons.go:69] Setting default-storageclass=true in profile "addons-807933"
	I0520 10:22:32.589527   12352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-807933"
	I0520 10:22:32.589585   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589016   12352 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-807933"
	I0520 10:22:32.589050   12352 addons.go:234] Setting addon volumesnapshots=true in "addons-807933"
	I0520 10:22:32.589692   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589685   12352 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-807933"
	I0520 10:22:32.589430   12352 addons.go:234] Setting addon ingress-dns=true in "addons-807933"
	I0520 10:22:32.589760   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589496   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.590010   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590051   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.590061   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590089   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589058   12352 addons.go:234] Setting addon inspektor-gadget=true in "addons-807933"
	I0520 10:22:32.590132   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.589143   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.590151   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589154   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.590260   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590302   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.590370   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590397   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589377   12352 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-807933"
	I0520 10:22:32.590523   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590531   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590548   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.590563   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589068   12352 addons.go:69] Setting helm-tiller=true in profile "addons-807933"
	I0520 10:22:32.588996   12352 addons.go:69] Setting yakd=true in profile "addons-807933"
	I0520 10:22:32.590614   12352 addons.go:234] Setting addon helm-tiller=true in "addons-807933"
	I0520 10:22:32.589645   12352 config.go:182] Loaded profile config "addons-807933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:22:32.590636   12352 addons.go:234] Setting addon yakd=true in "addons-807933"
	I0520 10:22:32.591094   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.591138   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.591183   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.591215   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.591436   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.591215   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.591678   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.591713   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.592078   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.592270   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.592324   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.592340   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.592960   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.593029   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.610909   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0520 10:22:32.612339   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0520 10:22:32.613293   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0520 10:22:32.613455   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0520 10:22:32.613829   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.613891   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.613911   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.613843   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I0520 10:22:32.613852   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0520 10:22:32.613869   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.614575   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.614601   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.614647   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.614672   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.614747   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.614835   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.614850   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.614943   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.615009   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.615044   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.615280   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.615302   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.615467   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.615495   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.615586   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.615813   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.615854   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.615926   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.615936   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.615949   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.615971   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.616025   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.616103   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.616149   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.616235   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.617676   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.617718   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.627681   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.627736   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.627787   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.627824   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.628140   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.628540   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.628571   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.628756   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.628781   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.628838   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.634540   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.634583   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.638525   12352 addons.go:234] Setting addon default-storageclass=true in "addons-807933"
	I0520 10:22:32.638601   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.639011   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.639194   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.647306   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0520 10:22:32.647751   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.648276   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.648295   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.648702   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.649353   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.649382   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.650768   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0520 10:22:32.651120   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.651593   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.651614   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.651922   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.652085   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.653923   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0520 10:22:32.654108   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.656217   12352 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 10:22:32.654507   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.657511   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0520 10:22:32.657968   12352 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 10:22:32.657981   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 10:22:32.657998   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.659129   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.659148   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.659493   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.659687   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.660222   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.660259   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.660456   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0520 10:22:32.660612   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0520 10:22:32.661251   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.661341   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.661777   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.661800   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.661859   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.661876   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.661989   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.662151   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.662298   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.662415   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.662687   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.663146   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.663164   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.663437   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.663452   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.663848   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.664169   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.664278   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.664449   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.664616   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.664673   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.665116   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.666568   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.668481   12352 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 10:22:32.667798   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0520 10:22:32.668060   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.669936   12352 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 10:22:32.671487   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 10:22:32.670381   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.671435   12352 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 10:22:32.672615   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 10:22:32.672633   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.674187   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 10:22:32.673194   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.675447   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.676789   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 10:22:32.675963   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.676955   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.677400   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0520 10:22:32.677683   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.678139   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 10:22:32.678277   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.678447   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.678863   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.679713   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.679890   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.679901   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 10:22:32.679229   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0520 10:22:32.680115   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.681131   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 10:22:32.682271   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 10:22:32.681332   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.681558   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.681906   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.684505   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 10:22:32.685651   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 10:22:32.685667   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 10:22:32.685684   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.684577   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.685740   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.684398   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.685779   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.685934   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0520 10:22:32.686121   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.686302   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.687471   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.687511   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.688009   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.688033   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.688251   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.688287   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.688370   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.688546   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.689237   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.691012   12352 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 10:22:32.692806   12352 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:22:32.692831   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 10:22:32.692850   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.690625   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.692944   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.692974   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.690721   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.691185   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.691777   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0520 10:22:32.693259   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.693599   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.693795   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.694079   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.694178   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33825
	I0520 10:22:32.695630   12352 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 10:22:32.694643   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.694677   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.696650   12352 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 10:22:32.696672   12352 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 10:22:32.696701   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.696754   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.696895   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I0520 10:22:32.697197   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.697219   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.697288   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.697630   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.697762   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.697778   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.697826   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.697833   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.698381   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.698421   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.698651   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.698658   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.698671   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.698697   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.698800   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.699042   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.699069   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.699107   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.699139   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.699160   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.699170   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.700145   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.700507   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.700531   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.700650   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.700795   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.700895   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.701003   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.705864   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0520 10:22:32.706290   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.707168   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.707183   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.707519   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.707698   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.709871   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.712181   12352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:22:32.711595   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I0520 10:22:32.711768   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0520 10:22:32.713605   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0520 10:22:32.714528   12352 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:22:32.714811   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:22:32.714830   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.715624   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.715939   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.716046   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.716535   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.716551   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.716688   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.716699   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.716808   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.716818   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.717195   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.717250   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0520 10:22:32.717345   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.717592   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.717653   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.717716   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.717801   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0520 10:22:32.717909   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.718508   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.718516   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.718654   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.718670   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.719067   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.719085   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.719153   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0520 10:22:32.719595   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.719783   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.719878   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.720278   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.720837   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.720893   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.720905   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.721127   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.722651   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 10:22:32.723803   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 10:22:32.723816   12352 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 10:22:32.723829   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.723058   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.721833   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.721723   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.723090   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.723938   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.723964   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.723306   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.723506   12352 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-807933"
	I0520 10:22:32.724038   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.725967   12352 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 10:22:32.724538   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.724671   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.725433   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.726833   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0520 10:22:32.727142   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 10:22:32.727171   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.727282   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.727816   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.728296   12352 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 10:22:32.729522   12352 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:22:32.728340   12352 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 10:22:32.728408   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.728413   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.728561   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.728648   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.728763   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.729623   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.729665   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 10:22:32.729682   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.729720   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.730112   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.730126   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.730182   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.730218   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.730326   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.730760   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.731115   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.733506   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.734269   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.734575   12352 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:22:32.734587   12352 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:22:32.734604   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.734810   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.735553   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0520 10:22:32.735566   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.735602   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.735619   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.735620   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0520 10:22:32.735730   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.735748   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.735752   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.735878   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.735934   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.736142   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.736228   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.736280   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.736341   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.736677   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.736694   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.736910   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.737076   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.737321   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.737506   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.737556   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.737758   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.737780   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.737830   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.737857   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.738060   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.738074   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.738208   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.738228   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.738343   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.738545   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.739487   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.741898   12352 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 10:22:32.741234   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.743116   12352 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 10:22:32.743126   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 10:22:32.743140   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.744557   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:22:32.746224   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 10:22:32.745851   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.746414   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.748695   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:22:32.748750   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.750013   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.748881   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.750189   12352 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:22:32.750202   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 10:22:32.750218   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.750289   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.750420   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.751487   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35445
	I0520 10:22:32.751865   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.752595   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.752667   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.753502   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.753561   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.754082   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.754113   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.754331   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0520 10:22:32.754348   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.754390   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.754410   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.754522   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.754659   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.754773   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.769104   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I0520 10:22:32.769412   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.770308   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.770336   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.770601   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.770778   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.772196   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.773981   12352 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 10:22:32.775171   12352 out.go:177]   - Using image docker.io/busybox:stable
	I0520 10:22:32.776400   12352 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:22:32.776415   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 10:22:32.776434   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.777420   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.777942   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.777963   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.778336   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.778567   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.779554   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.779935   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.779967   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.779978   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.780117   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.781457   12352 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 10:22:32.780293   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.782618   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 10:22:32.782631   12352 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 10:22:32.782645   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.782779   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.783308   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.785565   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.785985   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.786044   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.786154   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.786305   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.786434   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.786571   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	W0520 10:22:32.788414   12352 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49554->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:32.788442   12352 retry.go:31] will retry after 133.152417ms: ssh: handshake failed: read tcp 192.168.39.1:49554->192.168.39.86:22: read: connection reset by peer
	W0520 10:22:32.788501   12352 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49558->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:32.788511   12352 retry.go:31] will retry after 160.532242ms: ssh: handshake failed: read tcp 192.168.39.1:49558->192.168.39.86:22: read: connection reset by peer
	W0520 10:22:32.922369   12352 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49570->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:32.922403   12352 retry.go:31] will retry after 206.830679ms: ssh: handshake failed: read tcp 192.168.39.1:49570->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:33.117706   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 10:22:33.120159   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:22:33.135394   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:22:33.158944   12352 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 10:22:33.158969   12352 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 10:22:33.190375   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:22:33.210227   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 10:22:33.210254   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 10:22:33.233955   12352 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 10:22:33.233988   12352 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 10:22:33.264702   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 10:22:33.264727   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 10:22:33.329426   12352 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 10:22:33.329451   12352 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 10:22:33.339396   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:22:33.345731   12352 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 10:22:33.345753   12352 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 10:22:33.345964   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:22:33.387851   12352 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:22:33.387868   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 10:22:33.393025   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 10:22:33.393044   12352 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 10:22:33.394478   12352 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 10:22:33.394491   12352 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 10:22:33.408530   12352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:22:33.408591   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
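	(For readability: the sed pipeline above edits the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host, adding a "log" directive ahead of "errors" and a "hosts" block ahead of the "forward . /etc/resolv.conf" line. Reconstructed from the command itself — an illustration, not captured cluster output — the injected block is roughly:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	)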
	I0520 10:22:33.516030   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 10:22:33.516050   12352 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 10:22:33.527100   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 10:22:33.527118   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 10:22:33.549292   12352 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 10:22:33.549315   12352 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 10:22:33.565829   12352 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 10:22:33.565856   12352 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 10:22:33.604854   12352 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 10:22:33.604880   12352 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 10:22:33.611697   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:22:33.629008   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:22:33.629025   12352 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 10:22:33.675724   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 10:22:33.675741   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 10:22:33.721095   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 10:22:33.721120   12352 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 10:22:33.739098   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:22:33.769929   12352 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 10:22:33.769960   12352 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 10:22:33.780537   12352 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 10:22:33.780555   12352 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 10:22:33.803665   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 10:22:33.812642   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:22:33.913251   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 10:22:33.913275   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 10:22:33.936530   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 10:22:33.936561   12352 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 10:22:33.975754   12352 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 10:22:33.975795   12352 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 10:22:34.022207   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 10:22:34.022236   12352 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 10:22:34.090872   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 10:22:34.090894   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 10:22:34.110464   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:22:34.110483   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 10:22:34.252052   12352 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 10:22:34.252079   12352 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 10:22:34.271402   12352 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:22:34.271422   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 10:22:34.314839   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 10:22:34.314868   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 10:22:34.414343   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:22:34.445921   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:22:34.477791   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 10:22:34.477829   12352 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 10:22:34.512361   12352 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:22:34.512381   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 10:22:34.698878   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 10:22:34.698902   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 10:22:34.944166   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:22:35.179122   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 10:22:35.179144   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 10:22:35.578482   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:22:35.578514   12352 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 10:22:35.769379   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.651632541s)
	I0520 10:22:35.769420   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769417   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.649232846s)
	I0520 10:22:35.769434   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.769457   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769474   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.769740   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:35.769767   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.769775   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.769780   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:35.769790   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769799   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.769864   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.769875   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.769883   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769913   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.770010   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:35.770052   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.770065   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.770228   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.770243   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.909675   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:22:38.379175   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.24374315s)
	I0520 10:22:38.379230   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:38.379243   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:38.379532   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:38.379557   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:38.379567   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:38.379575   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:38.379615   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:38.379863   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:38.379886   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:39.758535   12352 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 10:22:39.758569   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:39.762466   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:39.762942   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:39.762983   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:39.763166   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:39.763376   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:39.763549   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:39.763697   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:40.289670   12352 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 10:22:40.538788   12352 addons.go:234] Setting addon gcp-auth=true in "addons-807933"
	I0520 10:22:40.538855   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:40.539291   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:40.539339   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:40.554395   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0520 10:22:40.554820   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:40.555399   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:40.555424   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:40.555780   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:40.556230   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:40.556254   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:40.571870   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0520 10:22:40.572326   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:40.572804   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:40.572828   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:40.573129   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:40.573307   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:40.575180   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:40.575403   12352 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 10:22:40.575431   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:40.578239   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:40.578650   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:40.578678   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:40.578854   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:40.579006   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:40.579133   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:40.579239   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:41.465716   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.275298427s)
	I0520 10:22:41.465753   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.126328183s)
	I0520 10:22:41.465769   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465783   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.465796   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465805   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.465802   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.119810706s)
	I0520 10:22:41.465830   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465839   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.465861   12352 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.05725306s)
	I0520 10:22:41.465882   12352 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.057325987s)
	I0520 10:22:41.465917   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.854195453s)
	I0520 10:22:41.465938   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465950   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466041   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.726923341s)
	I0520 10:22:41.466065   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466074   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466141   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.662453666s)
	I0520 10:22:41.466158   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466167   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466213   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466247   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466255   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466255   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.653590256s)
	I0520 10:22:41.466263   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466273   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466282   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466283   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466290   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466319   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466343   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466353   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466362   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466369   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466442   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.052066798s)
	W0520 10:22:41.466469   12352 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 10:22:41.466500   12352 retry.go:31] will retry after 219.080102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
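	(The "ensure CRDs are installed first" failure above is the usual ordering problem when CRDs and the custom resources that depend on them are applied in one batch: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml cannot be mapped until the volumesnapshotclasses CRD has been registered. The log shows minikube handling this by retrying, and the re-run at 10:22:41 below uses "kubectl apply --force". A minimal sketch of the conventional manual workaround, assuming the same addon manifests, would be to apply the CRDs on their own and wait for them to be established before applying the rest:

	        # apply the snapshot CRDs on their own
	        kubectl apply \
	          -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	          -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	          -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	        # wait until the API server has registered them
	        kubectl wait --for=condition=established --timeout=60s \
	          crd/volumesnapshotclasses.snapshot.storage.k8s.io
	        # only then apply the snapshot class, RBAC and controller deployment
	        kubectl apply \
	          -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	          -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	          -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	)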
	I0520 10:22:41.466552   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466560   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.020608424s)
	I0520 10:22:41.466565   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466577   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466588   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466607   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466614   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466699   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.522504263s)
	I0520 10:22:41.466717   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466726   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466744   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466809   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466834   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466841   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466850   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466857   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466872   12352 node_ready.go:35] waiting up to 6m0s for node "addons-807933" to be "Ready" ...
	I0520 10:22:41.466895   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466913   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466920   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466927   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466932   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466954   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466973   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466987   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466996   12352 addons.go:470] Verifying addon registry=true in "addons-807933"
	I0520 10:22:41.465879   12352 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 10:22:41.469564   12352 out.go:177] * Verifying registry addon...
	I0520 10:22:41.466933   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.467307   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.467346   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.467359   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.467374   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.467993   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.468010   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.468047   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.468071   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.468968   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.468983   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.469730   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.469746   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.469981   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.470010   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.471422   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471438   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471480   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471508   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471524   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.471535   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.472913   12352 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-807933 service yakd-dashboard -n yakd-dashboard
	
	I0520 10:22:41.471590   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471609   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471501   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471836   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.471861   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.471881   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.471909   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.472598   12352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 10:22:41.474179   12352 addons.go:470] Verifying addon ingress=true in "addons-807933"
	I0520 10:22:41.475608   12352 out.go:177] * Verifying ingress addon...
	I0520 10:22:41.474240   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.474253   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.475688   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.474256   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.474265   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.475787   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.475764   12352 addons.go:470] Verifying addon metrics-server=true in "addons-807933"
	I0520 10:22:41.475927   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.477197   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.475956   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.476087   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.477355   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.476120   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.477805   12352 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 10:22:41.485843   12352 node_ready.go:49] node "addons-807933" has status "Ready":"True"
	I0520 10:22:41.485858   12352 node_ready.go:38] duration metric: took 18.970366ms for node "addons-807933" to be "Ready" ...
	I0520 10:22:41.485867   12352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:22:41.508747   12352 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:22:41.508771   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:41.509070   12352 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 10:22:41.509094   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:41.534458   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.534477   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.534731   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.534745   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	W0520 10:22:41.534863   12352 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
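The operation that failed here is the standard "mark a StorageClass as default" patch; the conflict only means the object changed between read and write, so re-running it normally succeeds. A hand-run retry would look roughly like this (the annotation key is the upstream storageclass.kubernetes.io/is-default-class, not something specific to this run):

		# retry marking local-path as the default StorageClass after the conflict above
		kubectl --context addons-807933 patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'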
	I0520 10:22:41.544283   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.544301   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.544623   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.544582   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.544692   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.559560   12352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b4ffw" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.601936   12352 pod_ready.go:92] pod "coredns-7db6d8ff4d-b4ffw" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.601959   12352 pod_ready.go:81] duration metric: took 42.368414ms for pod "coredns-7db6d8ff4d-b4ffw" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.601969   12352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mq6st" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.625060   12352 pod_ready.go:92] pod "coredns-7db6d8ff4d-mq6st" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.625080   12352 pod_ready.go:81] duration metric: took 23.104459ms for pod "coredns-7db6d8ff4d-mq6st" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.625089   12352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.641186   12352 pod_ready.go:92] pod "etcd-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.641210   12352 pod_ready.go:81] duration metric: took 16.112316ms for pod "etcd-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.641222   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.686196   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:22:41.724525   12352 pod_ready.go:92] pod "kube-apiserver-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.724551   12352 pod_ready.go:81] duration metric: took 83.321214ms for pod "kube-apiserver-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.724564   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.870202   12352 pod_ready.go:92] pod "kube-controller-manager-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.870234   12352 pod_ready.go:81] duration metric: took 145.662391ms for pod "kube-controller-manager-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.870249   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-twglf" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.970135   12352 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-807933" context rescaled to 1 replicas
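The rescale recorded here corresponds to an ordinary deployment scale; an illustrative hand-run equivalent (names and replica count taken from the log line above) is:

		kubectl --context addons-807933 -n kube-system scale deployment coredns --replicas=1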
	I0520 10:22:41.979239   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:41.981548   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:42.271296   12352 pod_ready.go:92] pod "kube-proxy-twglf" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:42.271326   12352 pod_ready.go:81] duration metric: took 401.06798ms for pod "kube-proxy-twglf" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.271338   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.484580   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:42.500540   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:42.695564   12352 pod_ready.go:92] pod "kube-scheduler-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:42.695584   12352 pod_ready.go:81] duration metric: took 424.238743ms for pod "kube-scheduler-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.695593   12352 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.724721   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.814984356s)
	I0520 10:22:42.724756   12352 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.149325927s)
	I0520 10:22:42.724772   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:42.724787   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:42.726163   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:22:42.725037   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:42.725067   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:42.727505   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:42.727520   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:42.727530   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:42.728730   12352 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 10:22:42.729870   12352 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 10:22:42.729883   12352 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 10:22:42.727856   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:42.729964   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:42.729988   12352 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-807933"
	I0520 10:22:42.727886   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:42.731320   12352 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 10:22:42.733348   12352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 10:22:42.773943   12352 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:22:42.773986   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:42.879719   12352 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 10:22:42.879745   12352 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 10:22:42.975288   12352 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:22:42.975315   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 10:22:42.981132   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:42.984485   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.040168   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:22:43.251940   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:43.484740   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:43.489828   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.738938   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:43.982229   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.983304   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:43.994771   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.308534911s)
	I0520 10:22:43.994819   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:43.994834   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:43.995120   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:43.995169   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:43.995185   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:43.995194   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:43.995203   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:43.995494   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:43.995535   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:44.239430   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:44.529659   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:44.530277   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:44.657226   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.617014007s)
	I0520 10:22:44.657287   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:44.657305   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:44.657609   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:44.657675   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:44.657704   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:44.657717   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:44.657725   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:44.657940   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:44.658009   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:44.657967   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:44.659861   12352 addons.go:470] Verifying addon gcp-auth=true in "addons-807933"
	I0520 10:22:44.661258   12352 out.go:177] * Verifying gcp-auth addon...
	I0520 10:22:44.663199   12352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 10:22:44.686171   12352 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 10:22:44.686191   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:44.725124   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:44.753198   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:44.979455   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:44.982723   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:45.166734   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:45.240027   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:45.480092   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:45.482343   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:45.666100   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:45.737944   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:45.979360   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:45.981956   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:46.166635   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:46.239342   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:46.479465   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:46.481618   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:46.666791   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:46.743167   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:47.110218   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:47.110228   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:47.167132   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:47.201134   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:47.239200   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:47.479034   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:47.482045   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:47.668111   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:47.739035   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:47.978894   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:47.981598   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:48.167381   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:48.239524   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:48.479144   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:48.482156   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:48.667295   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:48.739783   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:48.978761   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:48.981927   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:49.171980   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:49.201337   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:49.239619   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:49.480782   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:49.482043   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:49.668055   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:49.739043   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:49.980820   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:49.985454   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:50.167643   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:50.241918   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:50.478781   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:50.481600   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:50.667221   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:50.740384   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:50.979682   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:50.981826   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:51.166965   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:51.239887   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:51.478225   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:51.481897   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:51.666859   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:51.701823   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:51.737839   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:51.979123   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:51.982076   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:52.167154   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:52.238329   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:52.479494   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:52.481888   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:52.667102   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:52.738647   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:52.980053   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:52.982183   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:53.169738   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:53.239797   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:53.480493   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:53.489412   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:53.667382   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:53.703306   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:53.740013   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:53.981206   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:53.983346   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:54.167063   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:54.238482   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:54.480274   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:54.482461   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:54.667565   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:54.740043   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:54.980884   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:54.985512   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:55.166631   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:55.238273   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:55.481443   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:55.483677   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:55.667239   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:55.738477   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:55.979513   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:55.981372   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:56.167373   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:56.201876   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:56.239033   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:56.479602   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:56.482929   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:56.668118   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:56.739686   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:56.980412   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:56.982208   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:57.171607   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:57.238535   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:57.479594   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:57.483253   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:57.667704   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:57.749859   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:57.978925   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:57.981796   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:58.167356   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:58.238766   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:58.478897   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:58.481719   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:58.668671   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:58.705967   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:58.738468   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:58.979738   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:58.981844   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:59.166589   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:59.239316   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:59.480150   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:59.484349   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:59.667328   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:59.738320   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:59.979593   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:59.981674   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:00.166829   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:00.240092   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:00.479517   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:00.482639   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:00.667598   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:00.738827   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:00.979503   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:00.982103   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:01.168338   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:01.201955   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:01.241515   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:01.478758   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:01.481913   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:01.667276   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:01.739623   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:01.978485   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:01.981394   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:02.167305   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:02.240508   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:02.479430   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:02.482088   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:02.667830   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:02.738987   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:02.979859   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:02.982795   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:03.167471   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:03.203230   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:03.241622   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:03.479197   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:03.481651   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:03.668778   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:03.740761   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:03.978165   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:03.982020   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:04.167600   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:04.238311   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:04.480871   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:04.482340   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:04.667220   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:04.739742   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:04.997807   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:04.998243   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:05.167289   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:05.238263   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:05.759663   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:05.759742   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:05.761525   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:05.762276   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:05.764312   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:05.978841   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:05.982345   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:06.167826   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:06.238242   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:06.479485   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:06.482099   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:06.667431   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:06.738764   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:06.979060   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:06.983469   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:07.168989   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:07.238805   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:07.478534   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:07.481363   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:07.667770   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:07.739153   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:07.979291   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:07.981556   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:08.166497   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:08.201748   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:08.242437   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:08.483324   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:08.483343   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:08.668184   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:08.737906   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:08.978524   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:08.981225   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:09.167158   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:09.239871   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:09.478699   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:09.481135   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:09.666896   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:09.738985   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:09.979685   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:09.988277   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:10.167321   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:10.239769   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:10.479783   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:10.483417   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:10.668016   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:10.700714   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:10.738094   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:10.980718   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:10.985372   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:11.167271   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:11.242604   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:11.479334   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:11.482698   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:11.667553   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:11.742304   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:11.979371   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:11.982513   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:12.167418   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:12.244207   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:12.479818   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:12.481930   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:12.668556   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:12.703022   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:12.740499   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:12.978958   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:12.983157   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:13.167500   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:13.239263   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:13.481367   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:13.482424   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:13.667359   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:13.738678   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:13.980672   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:13.982751   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:14.167024   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:14.239293   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:14.479327   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:14.481951   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:14.666924   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:14.703506   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:14.743002   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:14.980119   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:14.983759   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:15.166620   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:15.239603   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:15.479562   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:15.484668   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:15.667419   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:15.738504   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:15.980115   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:15.983208   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:16.169092   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:16.239991   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:16.478822   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:16.481577   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:16.667816   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:16.704561   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:16.740846   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:16.978939   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:16.981459   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:17.166801   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:17.238861   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:17.480695   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:17.482746   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:17.666827   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:17.738990   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:17.979533   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:17.982829   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:18.168334   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:18.239033   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:18.752632   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:18.752776   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:18.753364   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:18.756514   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:18.757069   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:18.979483   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:18.982318   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:19.167327   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:19.239159   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:19.479577   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:19.482907   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:19.666308   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:19.753670   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:19.979262   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:19.982103   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:20.167121   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:20.239582   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:20.480080   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:20.483352   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:20.667290   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:20.742021   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:20.979061   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:20.981839   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:21.167082   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:21.209252   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:21.242578   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:21.479930   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:21.484868   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:21.667871   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:21.739264   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:21.979088   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:21.982174   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:22.169691   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:22.241802   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:22.478903   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:22.481974   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:22.667264   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:22.746759   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:22.979920   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:22.982330   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:23.167991   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:23.239453   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:23.480495   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:23.482927   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:23.667604   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:23.701908   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:23.739101   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:23.978936   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:23.982120   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:24.167077   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:24.239750   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:24.478397   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:24.480949   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:24.667020   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:24.742354   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:24.982937   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:24.988860   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:25.166976   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:25.238147   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:25.478929   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:25.481768   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:25.666302   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:25.739359   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:25.979326   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:25.981591   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:26.167827   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:26.200798   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:26.238645   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:26.479657   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:26.482419   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:26.668870   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:26.749954   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:26.980732   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:26.983253   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:27.168167   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:27.241227   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:27.480413   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:27.482382   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:27.667571   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:27.739108   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:27.978955   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:27.981278   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:28.167444   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:28.202188   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:28.241747   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:28.479522   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:28.482377   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:28.903376   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:28.904400   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:28.991109   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:28.993292   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:29.168820   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:29.238887   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:29.478478   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:29.481966   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:29.667390   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:29.738509   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:29.979209   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:29.982434   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:30.167256   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:30.238624   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:30.478901   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:30.481624   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:30.671313   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:30.706224   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:30.738854   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:30.979228   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:30.981336   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:31.167330   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:31.239129   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:31.478744   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:31.481141   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:31.667194   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:31.738462   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:31.980050   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:31.988691   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:32.168175   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:32.238533   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:32.479258   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:32.482271   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:32.667418   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:32.738697   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:32.979722   12352 kapi.go:107] duration metric: took 51.507119416s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 10:23:32.982214   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:33.167676   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:33.202214   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:33.238869   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:33.482341   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:33.667098   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:33.737876   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:33.982589   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:34.167558   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:34.238495   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:34.482253   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:34.667638   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:34.738266   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:34.982517   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:35.168169   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:35.238626   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:35.482190   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:35.669319   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:35.700802   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:35.737977   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:35.985599   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:36.167172   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:36.241755   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:36.483506   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:36.667177   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:36.739242   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:36.982955   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:37.166978   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:37.240754   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:37.482284   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:37.666991   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:37.715101   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:37.743173   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:37.982634   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:38.168065   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:38.237822   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:38.932611   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:38.932995   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:38.936577   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:38.984194   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:39.169994   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:39.239324   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:39.484512   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:39.668697   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:39.741355   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:39.982199   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:40.167788   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:40.202233   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:40.242781   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:40.481746   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:40.667657   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:40.738321   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:40.983030   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:41.166813   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:41.241471   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:41.482255   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:41.668516   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:41.739575   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:41.982689   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:42.167507   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:42.238728   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:42.483020   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:42.667672   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:42.704475   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:42.741067   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:42.982398   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:43.167168   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:43.239945   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:43.482597   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:43.667566   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:43.739008   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:43.981859   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:44.167023   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:44.241156   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:44.482977   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:44.666840   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:44.738131   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:44.985374   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:45.168537   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:45.202980   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:45.250978   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:45.490567   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:45.667489   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:45.738725   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:46.014220   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:46.172093   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:46.241057   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:46.482561   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:46.672646   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:46.741727   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:46.989517   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:47.168618   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:47.238989   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:47.482414   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:47.668761   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:47.702491   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:47.738333   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:47.982562   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:48.167441   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:48.238927   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:48.488203   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:48.667245   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:48.739314   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:48.983025   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:49.167603   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:49.239128   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:49.483262   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:49.673758   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:49.743401   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:49.982998   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:50.166746   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:50.202419   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:50.240310   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:50.483054   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:50.666964   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:50.739262   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:50.982170   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:51.167476   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:51.238946   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:51.483437   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:51.667279   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:51.739216   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:51.982683   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:52.167176   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:52.239263   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:52.482483   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:52.676679   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:52.704153   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:53.166493   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:53.174874   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:53.174900   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:53.241236   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:53.482232   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:53.666824   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:53.738907   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:53.984323   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:54.167083   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:54.243029   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:54.482469   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:54.667002   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:54.739115   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:54.982590   12352 kapi.go:107] duration metric: took 1m13.504781188s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 10:23:55.166917   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:55.201059   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:55.241449   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:55.666758   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:55.746161   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:56.168208   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:56.239376   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:56.667304   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:56.738492   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:57.167365   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:57.203555   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:57.239673   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:57.667190   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:57.738137   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:58.167593   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:58.244116   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:58.675261   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:58.744743   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:59.168893   12352 kapi.go:107] duration metric: took 1m14.505690445s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 10:23:59.170866   12352 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-807933 cluster.
	I0520 10:23:59.172220   12352 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 10:23:59.173661   12352 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0520 10:23:59.204413   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:59.244414   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:59.738872   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:00.238693   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:00.738149   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:01.332535   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:01.335781   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:01.740992   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:02.238063   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:02.741285   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:03.241486   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:03.705088   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:03.739004   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:04.239766   12352 kapi.go:107] duration metric: took 1m21.506417837s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 10:24:04.241568   12352 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, yakd, helm-tiller, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0520 10:24:04.242887   12352 addons.go:505] duration metric: took 1m31.657343198s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns yakd helm-tiller metrics-server inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0520 10:24:06.203711   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:08.703279   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:11.201791   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:13.202301   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:15.701153   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:17.701468   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:18.702834   12352 pod_ready.go:92] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"True"
	I0520 10:24:18.702862   12352 pod_ready.go:81] duration metric: took 1m36.007261506s for pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace to be "Ready" ...
	I0520 10:24:18.702876   12352 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2bwlq" in "kube-system" namespace to be "Ready" ...
	I0520 10:24:18.716651   12352 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2bwlq" in "kube-system" namespace has status "Ready":"True"
	I0520 10:24:18.716674   12352 pod_ready.go:81] duration metric: took 13.790589ms for pod "nvidia-device-plugin-daemonset-2bwlq" in "kube-system" namespace to be "Ready" ...
	I0520 10:24:18.716699   12352 pod_ready.go:38] duration metric: took 1m37.230821368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:24:18.716721   12352 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:24:18.716756   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:24:18.716815   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:24:18.767995   12352 cri.go:89] found id: "f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:18.768022   12352 cri.go:89] found id: ""
	I0520 10:24:18.768039   12352 logs.go:276] 1 containers: [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75]
	I0520 10:24:18.768095   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.772571   12352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:24:18.772642   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:24:18.813105   12352 cri.go:89] found id: "0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:18.813128   12352 cri.go:89] found id: ""
	I0520 10:24:18.813135   12352 logs.go:276] 1 containers: [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27]
	I0520 10:24:18.813177   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.817579   12352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:24:18.817637   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:24:18.880226   12352 cri.go:89] found id: "0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:18.880253   12352 cri.go:89] found id: ""
	I0520 10:24:18.880262   12352 logs.go:276] 1 containers: [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2]
	I0520 10:24:18.880314   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.888993   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:24:18.889078   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:24:18.932538   12352 cri.go:89] found id: "78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:18.932557   12352 cri.go:89] found id: ""
	I0520 10:24:18.932564   12352 logs.go:276] 1 containers: [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e]
	I0520 10:24:18.932613   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.937063   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:24:18.937133   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:24:18.989617   12352 cri.go:89] found id: "dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:18.989643   12352 cri.go:89] found id: ""
	I0520 10:24:18.989652   12352 logs.go:276] 1 containers: [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad]
	I0520 10:24:18.989710   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.995025   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:24:18.995106   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:24:19.037763   12352 cri.go:89] found id: "01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:19.037784   12352 cri.go:89] found id: ""
	I0520 10:24:19.037793   12352 logs.go:276] 1 containers: [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136]
	I0520 10:24:19.037851   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:19.042269   12352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:24:19.042327   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:24:19.102165   12352 cri.go:89] found id: ""
	I0520 10:24:19.102188   12352 logs.go:276] 0 containers: []
	W0520 10:24:19.102197   12352 logs.go:278] No container was found matching "kindnet"
	I0520 10:24:19.102205   12352 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:24:19.102215   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:24:20.072292   12352 logs.go:123] Gathering logs for container status ...
	I0520 10:24:20.072340   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:24:20.125541   12352 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:24:20.125575   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:24:20.265478   12352 logs.go:123] Gathering logs for kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] ...
	I0520 10:24:20.265504   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:20.319540   12352 logs.go:123] Gathering logs for coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] ...
	I0520 10:24:20.319575   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:20.362397   12352 logs.go:123] Gathering logs for kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] ...
	I0520 10:24:20.362427   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:20.412157   12352 logs.go:123] Gathering logs for kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] ...
	I0520 10:24:20.412193   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:20.448477   12352 logs.go:123] Gathering logs for kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] ...
	I0520 10:24:20.448504   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:20.512304   12352 logs.go:123] Gathering logs for kubelet ...
	I0520 10:24:20.512343   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:24:20.575439   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:20.575595   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:20.595325   12352 logs.go:123] Gathering logs for dmesg ...
	I0520 10:24:20.595345   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:24:20.611572   12352 logs.go:123] Gathering logs for etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] ...
	I0520 10:24:20.611599   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:20.667129   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:20.667151   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:24:20.667198   12352 out.go:239] X Problems detected in kubelet:
	W0520 10:24:20.667209   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:20.667218   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:20.667226   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:20.667233   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:30.668372   12352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:24:30.688716   12352 api_server.go:72] duration metric: took 1m58.103188006s to wait for apiserver process to appear ...
	I0520 10:24:30.688744   12352 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:24:30.688776   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:24:30.688834   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:24:30.738498   12352 cri.go:89] found id: "f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:30.738524   12352 cri.go:89] found id: ""
	I0520 10:24:30.738533   12352 logs.go:276] 1 containers: [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75]
	I0520 10:24:30.738588   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.743295   12352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:24:30.743355   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:24:30.785220   12352 cri.go:89] found id: "0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:30.785244   12352 cri.go:89] found id: ""
	I0520 10:24:30.785253   12352 logs.go:276] 1 containers: [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27]
	I0520 10:24:30.785310   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.789718   12352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:24:30.789771   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:24:30.830436   12352 cri.go:89] found id: "0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:30.830460   12352 cri.go:89] found id: ""
	I0520 10:24:30.830469   12352 logs.go:276] 1 containers: [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2]
	I0520 10:24:30.830528   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.834815   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:24:30.834879   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:24:30.876193   12352 cri.go:89] found id: "78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:30.876212   12352 cri.go:89] found id: ""
	I0520 10:24:30.876219   12352 logs.go:276] 1 containers: [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e]
	I0520 10:24:30.876269   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.880999   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:24:30.881064   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:24:30.923098   12352 cri.go:89] found id: "dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:30.923126   12352 cri.go:89] found id: ""
	I0520 10:24:30.923135   12352 logs.go:276] 1 containers: [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad]
	I0520 10:24:30.923181   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.927467   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:24:30.927517   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:24:30.972749   12352 cri.go:89] found id: "01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:30.972763   12352 cri.go:89] found id: ""
	I0520 10:24:30.972770   12352 logs.go:276] 1 containers: [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136]
	I0520 10:24:30.972812   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.977493   12352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:24:30.977547   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:24:31.020852   12352 cri.go:89] found id: ""
	I0520 10:24:31.020875   12352 logs.go:276] 0 containers: []
	W0520 10:24:31.020884   12352 logs.go:278] No container was found matching "kindnet"
	I0520 10:24:31.020893   12352 logs.go:123] Gathering logs for kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] ...
	I0520 10:24:31.020909   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:31.075622   12352 logs.go:123] Gathering logs for etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] ...
	I0520 10:24:31.075650   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:31.146942   12352 logs.go:123] Gathering logs for container status ...
	I0520 10:24:31.146975   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:24:31.208712   12352 logs.go:123] Gathering logs for kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] ...
	I0520 10:24:31.208740   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:31.247891   12352 logs.go:123] Gathering logs for kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] ...
	I0520 10:24:31.247920   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:31.321221   12352 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:24:31.321253   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:24:32.208184   12352 logs.go:123] Gathering logs for kubelet ...
	I0520 10:24:32.208223   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:24:32.277155   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:32.277312   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:32.297331   12352 logs.go:123] Gathering logs for dmesg ...
	I0520 10:24:32.297353   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:24:32.312440   12352 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:24:32.312466   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:24:32.440156   12352 logs.go:123] Gathering logs for coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] ...
	I0520 10:24:32.440182   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:32.482207   12352 logs.go:123] Gathering logs for kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] ...
	I0520 10:24:32.482238   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:32.528151   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:32.528178   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:24:32.528226   12352 out.go:239] X Problems detected in kubelet:
	W0520 10:24:32.528236   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:32.528244   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:32.528251   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:32.528258   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:42.529965   12352 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0520 10:24:42.534312   12352 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I0520 10:24:42.535368   12352 api_server.go:141] control plane version: v1.30.1
	I0520 10:24:42.535389   12352 api_server.go:131] duration metric: took 11.846639044s to wait for apiserver health ...
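	(Illustrative aside, not part of the captured log: the apiserver health wait shown above amounts to polling the /healthz endpoint until it returns 200 OK or a deadline passes. A minimal Go sketch of that pattern follows; the endpoint URL, timeout, and the decision to skip TLS verification are assumptions for the example, not minikube's actual implementation.)

	// Sketch only: poll an apiserver /healthz endpoint until it reports healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed certificate in this setup, so the
			// sketch skips verification; a real client would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer("https://192.168.39.86:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}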
	I0520 10:24:42.535396   12352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:24:42.535414   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:24:42.535460   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:24:42.585491   12352 cri.go:89] found id: "f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:42.585512   12352 cri.go:89] found id: ""
	I0520 10:24:42.585521   12352 logs.go:276] 1 containers: [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75]
	I0520 10:24:42.585578   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.589902   12352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:24:42.589959   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:24:42.627921   12352 cri.go:89] found id: "0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:42.627939   12352 cri.go:89] found id: ""
	I0520 10:24:42.627945   12352 logs.go:276] 1 containers: [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27]
	I0520 10:24:42.627987   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.632346   12352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:24:42.632401   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:24:42.674788   12352 cri.go:89] found id: "0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:42.674806   12352 cri.go:89] found id: ""
	I0520 10:24:42.674819   12352 logs.go:276] 1 containers: [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2]
	I0520 10:24:42.674865   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.680857   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:24:42.680921   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:24:42.719199   12352 cri.go:89] found id: "78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:42.719218   12352 cri.go:89] found id: ""
	I0520 10:24:42.719225   12352 logs.go:276] 1 containers: [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e]
	I0520 10:24:42.719269   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.723653   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:24:42.723708   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:24:42.762888   12352 cri.go:89] found id: "dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:42.762913   12352 cri.go:89] found id: ""
	I0520 10:24:42.762921   12352 logs.go:276] 1 containers: [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad]
	I0520 10:24:42.762975   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.767276   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:24:42.767324   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:24:42.810973   12352 cri.go:89] found id: "01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:42.810993   12352 cri.go:89] found id: ""
	I0520 10:24:42.811005   12352 logs.go:276] 1 containers: [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136]
	I0520 10:24:42.811061   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.815644   12352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:24:42.815705   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:24:42.855406   12352 cri.go:89] found id: ""
	I0520 10:24:42.855443   12352 logs.go:276] 0 containers: []
	W0520 10:24:42.855453   12352 logs.go:278] No container was found matching "kindnet"
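	(Illustrative aside, not part of the captured log: each "listing CRI containers" step above runs `crictl ps -a --quiet --name=<component>` and treats every non-empty output line as a container ID; an empty result, as with "kindnet" here, means no matching container. The sketch below runs the command locally for simplicity, whereas minikube executes it over SSH inside the VM; that layout is an assumption for the example.)

	// Sketch only: look up container IDs for control-plane components via crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (in any state) whose
	// name matches the given component, e.g. "kube-apiserver" or "etcd".
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
			ids, err := listContainerIDs(c)
			fmt.Println(c, ids, err)
		}
	}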
	I0520 10:24:42.855465   12352 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:24:42.855480   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:24:42.975833   12352 logs.go:123] Gathering logs for coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] ...
	I0520 10:24:42.975867   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:43.016882   12352 logs.go:123] Gathering logs for dmesg ...
	I0520 10:24:43.016911   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:24:43.032116   12352 logs.go:123] Gathering logs for kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] ...
	I0520 10:24:43.032142   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:43.081740   12352 logs.go:123] Gathering logs for etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] ...
	I0520 10:24:43.081773   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:43.136339   12352 logs.go:123] Gathering logs for kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] ...
	I0520 10:24:43.136373   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:43.187372   12352 logs.go:123] Gathering logs for kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] ...
	I0520 10:24:43.187400   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:43.226989   12352 logs.go:123] Gathering logs for kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] ...
	I0520 10:24:43.227014   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:43.285466   12352 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:24:43.285495   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:24:44.058186   12352 logs.go:123] Gathering logs for container status ...
	I0520 10:24:44.058230   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:24:44.116999   12352 logs.go:123] Gathering logs for kubelet ...
	I0520 10:24:44.117034   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:24:44.180167   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:44.180350   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:44.201294   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:44.201318   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:24:44.201377   12352 out.go:239] X Problems detected in kubelet:
	W0520 10:24:44.201391   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:44.201400   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:44.201413   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:44.201424   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:54.211778   12352 system_pods.go:59] 18 kube-system pods found
	I0520 10:24:54.211817   12352 system_pods.go:61] "coredns-7db6d8ff4d-mq6st" [daaf804f-262b-4746-bc6e-d36538c125eb] Running
	I0520 10:24:54.211823   12352 system_pods.go:61] "csi-hostpath-attacher-0" [b4ccbec5-2ba2-43dc-930a-e427da3ffae1] Running
	I0520 10:24:54.211828   12352 system_pods.go:61] "csi-hostpath-resizer-0" [ecec8bb0-5ce1-4b2b-bfff-3193a56d99fd] Running
	I0520 10:24:54.211832   12352 system_pods.go:61] "csi-hostpathplugin-mckcj" [281dad2c-e7c6-4f48-9908-f6d1749c2207] Running
	I0520 10:24:54.211836   12352 system_pods.go:61] "etcd-addons-807933" [2d966690-4f95-4ba8-98a3-68fa25fb87d5] Running
	I0520 10:24:54.211842   12352 system_pods.go:61] "kube-apiserver-addons-807933" [9eb73c06-6275-419c-a0d8-7a8048b07d89] Running
	I0520 10:24:54.211846   12352 system_pods.go:61] "kube-controller-manager-addons-807933" [b51f0eab-78d0-4d6f-b57a-3128ac4e5e32] Running
	I0520 10:24:54.211851   12352 system_pods.go:61] "kube-ingress-dns-minikube" [29cc0a3f-eade-48c7-89c4-00bb5444237c] Running
	I0520 10:24:54.211855   12352 system_pods.go:61] "kube-proxy-twglf" [6854df94-6335-4952-ba2e-a0d93ed0be5c] Running
	I0520 10:24:54.211859   12352 system_pods.go:61] "kube-scheduler-addons-807933" [35a08f65-737a-49e3-ac2d-ab8ed326766f] Running
	I0520 10:24:54.211864   12352 system_pods.go:61] "metrics-server-c59844bb4-8j5ls" [bcafd302-7969-429c-b306-290348b05d85] Running
	I0520 10:24:54.211868   12352 system_pods.go:61] "nvidia-device-plugin-daemonset-2bwlq" [2f6875de-2696-4e7c-beb9-27efa228893f] Running
	I0520 10:24:54.211873   12352 system_pods.go:61] "registry-26dx9" [d90bb428-cf8d-481e-8467-e72e21658dcd] Running
	I0520 10:24:54.211878   12352 system_pods.go:61] "registry-proxy-vw56p" [d92f24a6-e6e3-4551-991f-30a570338e6a] Running
	I0520 10:24:54.211883   12352 system_pods.go:61] "snapshot-controller-745499f584-p4wsh" [93319461-fdba-42b8-956b-f1ac8250d1b2] Running
	I0520 10:24:54.211889   12352 system_pods.go:61] "snapshot-controller-745499f584-xlkcl" [5c4efb7e-3a0b-41f9-8e68-526168622579] Running
	I0520 10:24:54.211896   12352 system_pods.go:61] "storage-provisioner" [6742392f-cc63-40e3-8735-d483096cc9d1] Running
	I0520 10:24:54.211902   12352 system_pods.go:61] "tiller-deploy-6677d64bcd-lmqq4" [5327dee8-aaa4-44c5-9cf0-a904ab8b1b4a] Running
	I0520 10:24:54.211910   12352 system_pods.go:74] duration metric: took 11.676508836s to wait for pod list to return data ...
	I0520 10:24:54.211922   12352 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:24:54.214088   12352 default_sa.go:45] found service account: "default"
	I0520 10:24:54.214121   12352 default_sa.go:55] duration metric: took 2.188673ms for default service account to be created ...
	I0520 10:24:54.214130   12352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:24:54.221870   12352 system_pods.go:86] 18 kube-system pods found
	I0520 10:24:54.221897   12352 system_pods.go:89] "coredns-7db6d8ff4d-mq6st" [daaf804f-262b-4746-bc6e-d36538c125eb] Running
	I0520 10:24:54.221905   12352 system_pods.go:89] "csi-hostpath-attacher-0" [b4ccbec5-2ba2-43dc-930a-e427da3ffae1] Running
	I0520 10:24:54.221911   12352 system_pods.go:89] "csi-hostpath-resizer-0" [ecec8bb0-5ce1-4b2b-bfff-3193a56d99fd] Running
	I0520 10:24:54.221917   12352 system_pods.go:89] "csi-hostpathplugin-mckcj" [281dad2c-e7c6-4f48-9908-f6d1749c2207] Running
	I0520 10:24:54.221923   12352 system_pods.go:89] "etcd-addons-807933" [2d966690-4f95-4ba8-98a3-68fa25fb87d5] Running
	I0520 10:24:54.221929   12352 system_pods.go:89] "kube-apiserver-addons-807933" [9eb73c06-6275-419c-a0d8-7a8048b07d89] Running
	I0520 10:24:54.221935   12352 system_pods.go:89] "kube-controller-manager-addons-807933" [b51f0eab-78d0-4d6f-b57a-3128ac4e5e32] Running
	I0520 10:24:54.221942   12352 system_pods.go:89] "kube-ingress-dns-minikube" [29cc0a3f-eade-48c7-89c4-00bb5444237c] Running
	I0520 10:24:54.221951   12352 system_pods.go:89] "kube-proxy-twglf" [6854df94-6335-4952-ba2e-a0d93ed0be5c] Running
	I0520 10:24:54.221958   12352 system_pods.go:89] "kube-scheduler-addons-807933" [35a08f65-737a-49e3-ac2d-ab8ed326766f] Running
	I0520 10:24:54.221968   12352 system_pods.go:89] "metrics-server-c59844bb4-8j5ls" [bcafd302-7969-429c-b306-290348b05d85] Running
	I0520 10:24:54.221974   12352 system_pods.go:89] "nvidia-device-plugin-daemonset-2bwlq" [2f6875de-2696-4e7c-beb9-27efa228893f] Running
	I0520 10:24:54.221981   12352 system_pods.go:89] "registry-26dx9" [d90bb428-cf8d-481e-8467-e72e21658dcd] Running
	I0520 10:24:54.221989   12352 system_pods.go:89] "registry-proxy-vw56p" [d92f24a6-e6e3-4551-991f-30a570338e6a] Running
	I0520 10:24:54.221996   12352 system_pods.go:89] "snapshot-controller-745499f584-p4wsh" [93319461-fdba-42b8-956b-f1ac8250d1b2] Running
	I0520 10:24:54.222006   12352 system_pods.go:89] "snapshot-controller-745499f584-xlkcl" [5c4efb7e-3a0b-41f9-8e68-526168622579] Running
	I0520 10:24:54.222013   12352 system_pods.go:89] "storage-provisioner" [6742392f-cc63-40e3-8735-d483096cc9d1] Running
	I0520 10:24:54.222022   12352 system_pods.go:89] "tiller-deploy-6677d64bcd-lmqq4" [5327dee8-aaa4-44c5-9cf0-a904ab8b1b4a] Running
	I0520 10:24:54.222030   12352 system_pods.go:126] duration metric: took 7.892886ms to wait for k8s-apps to be running ...
	I0520 10:24:54.222039   12352 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:24:54.222095   12352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:24:54.245886   12352 system_svc.go:56] duration metric: took 23.840706ms WaitForService to wait for kubelet
	I0520 10:24:54.245919   12352 kubeadm.go:576] duration metric: took 2m21.660395751s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:24:54.245944   12352 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:24:54.249069   12352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:24:54.249091   12352 node_conditions.go:123] node cpu capacity is 2
	I0520 10:24:54.249104   12352 node_conditions.go:105] duration metric: took 3.154722ms to run NodePressure ...
	I0520 10:24:54.249115   12352 start.go:240] waiting for startup goroutines ...
	I0520 10:24:54.249124   12352 start.go:245] waiting for cluster config update ...
	I0520 10:24:54.249138   12352 start.go:254] writing updated cluster config ...
	I0520 10:24:54.249400   12352 ssh_runner.go:195] Run: rm -f paused
	I0520 10:24:54.299257   12352 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:24:54.301323   12352 out.go:177] * Done! kubectl is now configured to use "addons-807933" cluster and "default" namespace by default
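	(Illustrative aside, not part of the captured log: the final readiness step above verifies that the kubelet systemd unit is active; `systemctl is-active --quiet` exits 0 only when the unit is running. The unit name and the use of a local exec rather than minikube's SSH runner are assumptions for this sketch.)

	// Sketch only: check whether the kubelet systemd service is running.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning reports whether the kubelet systemd unit is currently active.
	func kubeletRunning() bool {
		// A nil error means the command exited 0, i.e. the unit is active.
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletRunning())
	}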
	
	
	==> CRI-O <==
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.914323417Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200559634952084,StartedAt:1716200560184405675,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6742392f-cc63-40e3-8735-d483096cc9d1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6742392f-cc63-40e3-8735-d483096cc9d1/containers/storage-provisioner/fab5d45f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6742392f-cc63-40e3-8735-d483096cc9d1/volumes/kubernetes.io~projected/kube-api-access-42lx4,Readonly:true,SelinuxRelabel:false,Pr
opagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_6742392f-cc63-40e3-8735-d483096cc9d1/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7a1657f1-688c-4349-b372-12dbdbe4b6fa name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.914740352Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e2984df1-2cbf-43a3-a436-617603509a2d name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.914872866Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200555984935781,StartedAt:1716200556321914836,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/daaf804f-262b-4746-bc6e-d36538c125eb/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/daaf804f-262b-4746-bc6e-d36538c125eb/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/daaf804f-262b-4746-bc6e-d36538c125eb/containers/coredns/61adb720,Readonly:false,SelinuxRelabel:false,Propagation:PRO
PAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/daaf804f-262b-4746-bc6e-d36538c125eb/volumes/kubernetes.io~projected/kube-api-access-8tlpr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-mq6st_daaf804f-262b-4746-bc6e-d36538c125eb/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:982,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e2984df1-2cbf-43a3-a436-617603509a2d name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.915259198Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,Verbose:false,}" file="otel-collector/interceptors.go:62" id=272006c7-7625-49c6-b23a-ea5bd8ed98be name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.915436662Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200554325483419,StartedAt:1716200554616961565,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6854df94-6335-4952-ba2e-a0d93ed0be5c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6854df94-6335-4952-ba2e-a0d93ed0be5c/containers/kube-proxy/63583639,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/
kubelet/pods/6854df94-6335-4952-ba2e-a0d93ed0be5c/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6854df94-6335-4952-ba2e-a0d93ed0be5c/volumes/kubernetes.io~projected/kube-api-access-jz9ql,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-twglf_6854df94-6335-4952-ba2e-a0d93ed0be5c/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-colle
ctor/interceptors.go:74" id=272006c7-7625-49c6-b23a-ea5bd8ed98be name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.915849091Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0fa9f20c-ba8e-48e8-8a09-12131aab841a name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.916064683Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200533784973274,StartedAt:1716200533907117209,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d2a66e920b1bbc9702b2e72fd1d02091/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d2a66e920b1bbc9702b2e72fd1d02091/containers/kube-scheduler/7768e7fe,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-807933_d2a66e920b1bbc9702b2e72fd1d02091/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,
CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0fa9f20c-ba8e-48e8-8a09-12131aab841a name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.917260383Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136,Verbose:false,}" file="otel-collector/interceptors.go:62" id=691c36f0-5b01-46d5-8f18-f698f78b5adf name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.917486617Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200533754497776,StartedAt:1716200533828132171,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/be6de6c8b447877c49cad2d9c3c6e55a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/be6de6c8b447877c49cad2d9c3c6e55a/containers/kube-controller-manager/055092fa,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMapp
ings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-807933_be6de6c8b447877c49cad2d9c3c6e55a/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,Hugepag
eLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=691c36f0-5b01-46d5-8f18-f698f78b5adf name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.917899846Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27,Verbose:false,}" file="otel-collector/interceptors.go:62" id=93858151-0c34-4bb8-9b87-0a1667668816 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.918163901Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200533718702449,StartedAt:1716200533787164533,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/170b1dec80d4c835528c7c40c5df9bd9/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/170b1dec80d4c835528c7c40c5df9bd9/containers/etcd/cbd24e48,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-8
07933_170b1dec80d4c835528c7c40c5df9bd9/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=93858151-0c34-4bb8-9b87-0a1667668816 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.919186441Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e4034706-b579-41b6-8911-b3a5262ec5d0 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.919532789Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716200533622403292,StartedAt:1716200533709001849,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a2b9f91033eaca22c7aded584f87209e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a2b9f91033eaca22c7aded584f87209e/containers/kube-apiserver/d03352ee,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-807933_a2b9f91033eaca22c7aded584f87209e/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e4034706-b579-41b6-8911-b3a5262ec5d0 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.952370646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=439508a4-91e5-49b8-9686-4072fda8c941 name=/runtime.v1.RuntimeService/Version
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.952587216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=439508a4-91e5-49b8-9686-4072fda8c941 name=/runtime.v1.RuntimeService/Version
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.954762630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f6d7a36-4013-40fa-a700-f89220051ce7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.955915844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716200868955890775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f6d7a36-4013-40fa-a700-f89220051ce7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.956641901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a17a322-4281-490e-beff-3d8d069d9267 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.956755330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a17a322-4281-490e-beff-3d8d069d9267 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.957066844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25edf660d8e0349eab2c24d609c085b354a740db1500ee0ee30ebcfd0603f144,PodSandboxId:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716200862405014463,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b360723,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f04d6299b3c59db2d1b2009b97a2e6f596a6f70d700f8dd92710a6987519b8,PodSandboxId:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716200721959483961,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,},Annotations:map[string]string{io.kubern
etes.container.hash: f69fdaa6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3abcb59b60ea8124e50095e6b5505433e8979856645b96d9f626645bfca5e875,PodSandboxId:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716200706373944840,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 3a27a456-c5bb-455f-9228-1176f6695168,},Annotations:map[string]string{io.kubernetes.container.hash: 6618428d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3,PodSandboxId:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716200638625275107,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,},Annotations:map[string]string{io.kubernetes.container.hash: 8bce9a32,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f7b3afbcb9757686e16fef0788c036f791930adf5b30122c2f0909f03f5054,PodSandboxId:e879510baa51d9902df5e32fd68d26269b43b839856ad208876eb95b061a81fb,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716200624071846961,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j5fd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46c0e0f5-9fc4-4682-82ee-d36d4adaf012,},Annotations:map[string]string{io.kubernetes.container.hash: a296311a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ad985f33aab2992e37f69d0d95974ea7139517a2e4b5745178e777ab74da2,PodSandboxId:f6ddf1fd09945a593e516c7c892a53def3b45f470c76364bbb87054d9fd61c36,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:
1716200622695615556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jqk8n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b7d92727-8edb-4d89-b56b-6981e64d203c,},Annotations:map[string]string{io.kubernetes.container.hash: bb129cd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cd4b11836ec1caece9bd19f5fb75a2f9a95cec04fe5c83859e36e621c5a90,PodSandboxId:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3f
cf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1716200615076945512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,},Annotations:map[string]string{io.kubernetes.container.hash: ea447b4c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e96316e67aa9833fd4520bbee6643a19d991aa16517c65fe40dffcee51f274,PodSandboxId:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d837136
1afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716200605002947565,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,},Annotations:map[string]string{io.kubernetes.container.hash: 6cff23d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2ad193ac9f00d256ecd39b436bf964eb0d37b7b864145a28031abad002faf5,PodSandboxId:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc5965
53c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716200588326514008,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,},Annotations:map[string]string{io.kubernetes.container.hash: b0b217f4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,PodSandboxId:39879541f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&ContainerMetadata{
Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716200559437008065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,PodSandboxId:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&ContainerMetadata{Name:coredns
,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716200555854745557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,PodSandboxId:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716200553898429247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b20ded1733cce3ca061
4825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e,PodSandboxId:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716200533662045003,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bce967d30acaffabbda8c913f4d582e5ff41
585bfba5807396323c8cad1136,PodSandboxId:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716200533637665567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc118a7deeb385af7c8c02997ad
3c01f4ac7604711393a111cce06f09c0dc27,PodSandboxId:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716200533601660269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,PodSandbox
Id:42ec4aa1d5e4867e31253ddd10c408c25883d9fc8bdb92decba052d4302dc2b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716200533552390047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a17a322-4281-490e-beff-3d8d069d9267 name=/runtime.v1.Run
timeService/ListContainers
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.969196817Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ac7d8933-a833-4fbb-ac38-4289ecec9a5e name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.969789019Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-85gqz,Uid:5d1a9475-8be4-4408-be74-84d6dd2f8b8c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200859123352070,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:27:38.803366158Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&PodSandboxMetadata{Name:nginx,Uid:1c7de491-1690-49fe-8238-f9e1b0120d62,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1716200717070131508,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:25:15.859505037Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&PodSandboxMetadata{Name:headlamp-68456f997b-tmsmw,Uid:3a27a456-c5bb-455f-9228-1176f6695168,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200701313831997,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 3a27a456-c5bb-455f-9228-1176f6695168,pod-template-hash: 68456f997b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
05-20T10:25:01.004213989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-6cf67,Uid:a0535f01-6cab-4cf2-8e4a-35bf2f764e62,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200629766758402,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:44.598076118Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c8582ad36dfee27d34785b8387e6e9c3bd946a7532574d987cccf273a5a83c9,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-768f948f8f-86gwj,Uid:30fbb3e4-6352-4d15-99dd-fedde65cf5c0,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NO
TREADY,CreatedAt:1716200625476059498,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-768f948f8f-86gwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 30fbb3e4-6352-4d15-99dd-fedde65cf5c0,pod-template-hash: 768f948f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:41.260057759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e879510baa51d9902df5e32fd68d26269b43b839856ad208876eb95b061a81fb,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-j5fd8,Uid:46c0e0f5-9fc4-4682-82ee-d36d4adaf012,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716200563336762200,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.k
ubernetes.io/controller-uid: 42e8cbe0-0804-4a70-8185-35a769bf1952,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 42e8cbe0-0804-4a70-8185-35a769bf1952,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-j5fd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46c0e0f5-9fc4-4682-82ee-d36d4adaf012,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:41.427365155Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6ddf1fd09945a593e516c7c892a53def3b45f470c76364bbb87054d9fd61c36,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-jqk8n,Uid:b7d92727-8edb-4d89-b56b-6981e64d203c,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716200563238648121,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-u
id: fa1b9cbb-d039-4715-b7e5-8bd1fb0d46be,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: fa1b9cbb-d039-4715-b7e5-8bd1fb0d46be,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-jqk8n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b7d92727-8edb-4d89-b56b-6981e64d203c,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:41.364417114Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-8d985888d-l5zbg,Uid:f12e41f5-3788-4d37-a744-248e4a658254,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200559569908382,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.k
ubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,pod-template-hash: 8d985888d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:38.941946677Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-pm765,Uid:485d8c24-bc61-48a4-a4c0-b9b52de3ea25,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200559370653881,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:38.676474252Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:398795
41f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6742392f-cc63-40e3-8735-d483096cc9d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200558721461411,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"sto
rage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-20T10:22:38.411891594Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-8j5ls,Uid:bcafd302-7969-429c-b306-290348b05d85,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200558464717768,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:37.851697874Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:682f12ec6662e2ab91e09c09a10d7344e27607769a64a74270f7256754614d82,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:29cc0a3f-eade-48c7-89c4-00bb5444237c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716200557219818141,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29cc0a3f-eade-48c7-89c4-00bb5444237c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.pod
IP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-05-20T10:22:36.598467720Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&PodSandboxMetadata{Name:kube-proxy-twglf,Uid:6854df94-6335-4952-ba2e-a0d93ed0be5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200553102024111,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]str
ing{kubernetes.io/config.seen: 2024-05-20T10:22:32.166088596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-mq6st,Uid:daaf804f-262b-4746-bc6e-d36538c125eb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200553028257031,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:22:32.714009061Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-807933,Uid:d2a66e920b1bbc9702b2e72fd1d02091,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1
716200533390910787,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d2a66e920b1bbc9702b2e72fd1d02091,kubernetes.io/config.seen: 2024-05-20T10:22:12.930912548Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-807933,Uid:be6de6c8b447877c49cad2d9c3c6e55a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200533388257264,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,tier: control-plane,
},Annotations:map[string]string{kubernetes.io/config.hash: be6de6c8b447877c49cad2d9c3c6e55a,kubernetes.io/config.seen: 2024-05-20T10:22:12.930911724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&PodSandboxMetadata{Name:etcd-addons-807933,Uid:170b1dec80d4c835528c7c40c5df9bd9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200533385707793,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.86:2379,kubernetes.io/config.hash: 170b1dec80d4c835528c7c40c5df9bd9,kubernetes.io/config.seen: 2024-05-20T10:22:12.930906858Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:42ec4aa1d5e4867e31253ddd10c408c2588
3d9fc8bdb92decba052d4302dc2b8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-807933,Uid:a2b9f91033eaca22c7aded584f87209e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716200533385250353,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.86:8443,kubernetes.io/config.hash: a2b9f91033eaca22c7aded584f87209e,kubernetes.io/config.seen: 2024-05-20T10:22:12.930910686Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ac7d8933-a833-4fbb-ac38-4289ecec9a5e name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.971424747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b45fe03-b46e-48d7-90a4-371ba317554f name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.971595557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b45fe03-b46e-48d7-90a4-371ba317554f name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:27:48 addons-807933 crio[678]: time="2024-05-20 10:27:48.971992141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25edf660d8e0349eab2c24d609c085b354a740db1500ee0ee30ebcfd0603f144,PodSandboxId:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716200862405014463,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b360723,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f04d6299b3c59db2d1b2009b97a2e6f596a6f70d700f8dd92710a6987519b8,PodSandboxId:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716200721959483961,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,},Annotations:map[string]string{io.kubern
etes.container.hash: f69fdaa6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3abcb59b60ea8124e50095e6b5505433e8979856645b96d9f626645bfca5e875,PodSandboxId:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716200706373944840,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 3a27a456-c5bb-455f-9228-1176f6695168,},Annotations:map[string]string{io.kubernetes.container.hash: 6618428d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3,PodSandboxId:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716200638625275107,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,},Annotations:map[string]string{io.kubernetes.container.hash: 8bce9a32,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f7b3afbcb9757686e16fef0788c036f791930adf5b30122c2f0909f03f5054,PodSandboxId:e879510baa51d9902df5e32fd68d26269b43b839856ad208876eb95b061a81fb,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716200624071846961,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-j5fd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 46c0e0f5-9fc4-4682-82ee-d36d4adaf012,},Annotations:map[string]string{io.kubernetes.container.hash: a296311a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ad985f33aab2992e37f69d0d95974ea7139517a2e4b5745178e777ab74da2,PodSandboxId:f6ddf1fd09945a593e516c7c892a53def3b45f470c76364bbb87054d9fd61c36,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:
1716200622695615556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jqk8n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b7d92727-8edb-4d89-b56b-6981e64d203c,},Annotations:map[string]string{io.kubernetes.container.hash: bb129cd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cd4b11836ec1caece9bd19f5fb75a2f9a95cec04fe5c83859e36e621c5a90,PodSandboxId:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3f
cf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1716200615076945512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,},Annotations:map[string]string{io.kubernetes.container.hash: ea447b4c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e96316e67aa9833fd4520bbee6643a19d991aa16517c65fe40dffcee51f274,PodSandboxId:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d837136
1afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716200605002947565,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,},Annotations:map[string]string{io.kubernetes.container.hash: 6cff23d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2ad193ac9f00d256ecd39b436bf964eb0d37b7b864145a28031abad002faf5,PodSandboxId:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc5965
53c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716200588326514008,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,},Annotations:map[string]string{io.kubernetes.container.hash: b0b217f4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,PodSandboxId:39879541f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&ContainerMetadata{
Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716200559437008065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,PodSandboxId:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&ContainerMetadata{Name:coredns
,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716200555854745557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,PodSandboxId:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716200553898429247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b20ded1733cce3ca061
4825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e,PodSandboxId:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716200533662045003,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bce967d30acaffabbda8c913f4d582e5ff41
585bfba5807396323c8cad1136,PodSandboxId:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716200533637665567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc118a7deeb385af7c8c02997ad
3c01f4ac7604711393a111cce06f09c0dc27,PodSandboxId:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716200533601660269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,PodSandbox
Id:42ec4aa1d5e4867e31253ddd10c408c25883d9fc8bdb92decba052d4302dc2b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716200533552390047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b45fe03-b46e-48d7-90a4-371ba317554f name=/runtime.v1.Run
timeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	25edf660d8e03       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   a2e7c7822fa76       hello-world-app-86c47465fc-85gqz
	22f04d6299b3c       docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                              2 minutes ago       Running             nginx                     0                   0df6c01c2dac8       nginx
	3abcb59b60ea8       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        2 minutes ago       Running             headlamp                  0                   6ac45ca42c7f7       headlamp-68456f997b-tmsmw
	fa4ba5e754ff4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   a14f513ff363e       gcp-auth-5db96cd9b4-6cf67
	e7f7b3afbcb97       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   e879510baa51d       ingress-nginx-admission-patch-j5fd8
	453ad985f33aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   f6ddf1fd09945       ingress-nginx-admission-create-jqk8n
	465cd4b11836e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   b435b439acd40       local-path-provisioner-8d985888d-l5zbg
	52e96316e67aa       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   47f698f1ebf8e       yakd-dashboard-5ddbf7d777-pm765
	ba2ad193ac9f0       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   c90dcfa0d09b6       metrics-server-c59844bb4-8j5ls
	95bf199e010eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   39879541f96cd       storage-provisioner
	0c4fe6807851f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   ddca6ca50d59c       coredns-7db6d8ff4d-mq6st
	dbb511b719a84       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             5 minutes ago       Running             kube-proxy                0                   204ea35171763       kube-proxy-twglf
	78b20ded1733c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             5 minutes ago       Running             kube-scheduler            0                   46199d25b6624       kube-scheduler-addons-807933
	01bce967d30ac       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             5 minutes ago       Running             kube-controller-manager   0                   97fecee9f0945       kube-controller-manager-addons-807933
	0cc118a7deeb3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   328c8e8349f47       etcd-addons-807933
	f7dc6cd1195c0       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             5 minutes ago       Running             kube-apiserver            0                   42ec4aa1d5e48       kube-apiserver-addons-807933
	
	
	==> coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] <==
	[INFO] 10.244.0.8:59922 - 58655 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000781145s
	[INFO] 10.244.0.8:36732 - 43419 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100817s
	[INFO] 10.244.0.8:36732 - 53913 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000282971s
	[INFO] 10.244.0.8:40035 - 49290 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063091s
	[INFO] 10.244.0.8:40035 - 11636 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012652s
	[INFO] 10.244.0.8:52237 - 22264 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096401s
	[INFO] 10.244.0.8:52237 - 45305 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000251602s
	[INFO] 10.244.0.8:55243 - 11352 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000224547s
	[INFO] 10.244.0.8:55243 - 36955 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000449759s
	[INFO] 10.244.0.8:36413 - 40609 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106138s
	[INFO] 10.244.0.8:36413 - 63399 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174993s
	[INFO] 10.244.0.8:58756 - 63130 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054254s
	[INFO] 10.244.0.8:58756 - 42136 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057863s
	[INFO] 10.244.0.8:48740 - 462 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062402s
	[INFO] 10.244.0.8:48740 - 39372 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061194s
	[INFO] 10.244.0.22:35763 - 16855 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000369652s
	[INFO] 10.244.0.22:38764 - 5383 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000071741s
	[INFO] 10.244.0.22:45513 - 46688 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000085209s
	[INFO] 10.244.0.22:59596 - 25851 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000057536s
	[INFO] 10.244.0.22:33447 - 4009 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062981s
	[INFO] 10.244.0.22:41135 - 51630 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072994s
	[INFO] 10.244.0.22:50421 - 38875 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000457828s
	[INFO] 10.244.0.22:59684 - 39361 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000414696s
	[INFO] 10.244.0.24:39961 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000566482s
	[INFO] 10.244.0.24:53873 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154189s
	
	
	==> describe nodes <==
	Name:               addons-807933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-807933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=addons-807933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_22_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-807933
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:22:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-807933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:27:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:25:52 +0000   Mon, 20 May 2024 10:22:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:25:52 +0000   Mon, 20 May 2024 10:22:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:25:52 +0000   Mon, 20 May 2024 10:22:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:25:52 +0000   Mon, 20 May 2024 10:22:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    addons-807933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 f092bdd898a34a4aa0965a4ec711a781
	  System UUID:                f092bdd8-98a3-4a4a-a096-5a4ec711a781
	  Boot ID:                    8009c0da-3784-40d2-9986-64c33fdbb8fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-85gqz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-6cf67                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  headlamp                    headlamp-68456f997b-tmsmw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-7db6d8ff4d-mq6st                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m17s
	  kube-system                 etcd-addons-807933                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m31s
	  kube-system                 kube-apiserver-addons-807933              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-addons-807933     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-twglf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-addons-807933              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 metrics-server-c59844bb4-8j5ls            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m12s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  local-path-storage          local-path-provisioner-8d985888d-l5zbg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-pm765           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m14s  kube-proxy       
	  Normal  Starting                 5m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m31s  kubelet          Node addons-807933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s  kubelet          Node addons-807933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s  kubelet          Node addons-807933 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m30s  kubelet          Node addons-807933 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node addons-807933 event: Registered Node addons-807933 in Controller
	
	
	==> dmesg <==
	[  +0.154950] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.148630] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.272822] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.280123] kauditd_printk_skb: 95 callbacks suppressed
	[May20 10:23] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.502298] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.411511] kauditd_printk_skb: 30 callbacks suppressed
	[ +19.875950] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.026579] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.671427] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.427012] kauditd_printk_skb: 11 callbacks suppressed
	[May20 10:24] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.038060] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.312048] kauditd_printk_skb: 24 callbacks suppressed
	[May20 10:25] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.262235] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.501325] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.721401] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.791704] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.540994] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.057244] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.935191] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.245271] kauditd_printk_skb: 33 callbacks suppressed
	[May20 10:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.085116] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] <==
	{"level":"warn","ts":"2024-05-20T10:23:53.15191Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:23:52.726059Z","time spent":"425.845476ms","remote":"127.0.0.1:47296","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85507,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-05-20T10:23:53.151527Z","caller":"traceutil/trace.go:171","msg":"trace[1190093925] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"425.338548ms","start":"2024-05-20T10:23:52.726167Z","end":"2024-05-20T10:23:53.151505Z","steps":["trace[1190093925] 'read index received'  (duration: 425.329168ms)","trace[1190093925] 'applied index is now lower than readState.Index'  (duration: 7.613µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:23:53.161209Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.973673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:23:53.161263Z","caller":"traceutil/trace.go:171","msg":"trace[577843866] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"290.0538ms","start":"2024-05-20T10:23:52.8712Z","end":"2024-05-20T10:23:53.161253Z","steps":["trace[577843866] 'agreement among raft nodes before linearized reading'  (duration: 289.969227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:23:53.161561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.337168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14357"}
	{"level":"info","ts":"2024-05-20T10:23:53.161611Z","caller":"traceutil/trace.go:171","msg":"trace[769988353] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1098; }","duration":"191.451691ms","start":"2024-05-20T10:23:52.970153Z","end":"2024-05-20T10:23:53.161604Z","steps":["trace[769988353] 'agreement among raft nodes before linearized reading'  (duration: 191.324227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:23:58.143712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.950657ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:23:58.143785Z","caller":"traceutil/trace.go:171","msg":"trace[161434647] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1126; }","duration":"122.037686ms","start":"2024-05-20T10:23:58.021734Z","end":"2024-05-20T10:23:58.143771Z","steps":["trace[161434647] 'range keys from in-memory index tree'  (duration: 121.933856ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:24:01.317432Z","caller":"traceutil/trace.go:171","msg":"trace[2107161008] linearizableReadLoop","detail":"{readStateIndex:1183; appliedIndex:1182; }","duration":"129.731172ms","start":"2024-05-20T10:24:01.187688Z","end":"2024-05-20T10:24:01.317419Z","steps":["trace[2107161008] 'read index received'  (duration: 129.008165ms)","trace[2107161008] 'applied index is now lower than readState.Index'  (duration: 722.379µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:24:01.317675Z","caller":"traceutil/trace.go:171","msg":"trace[24168834] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"157.849027ms","start":"2024-05-20T10:24:01.159815Z","end":"2024-05-20T10:24:01.317664Z","steps":["trace[24168834] 'process raft request'  (duration: 157.455086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:24:01.317916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.236108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8j5ls\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T10:24:01.317957Z","caller":"traceutil/trace.go:171","msg":"trace[831994397] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8j5ls; range_end:; response_count:1; response_revision:1151; }","duration":"130.349534ms","start":"2024-05-20T10:24:01.187601Z","end":"2024-05-20T10:24:01.317951Z","steps":["trace[831994397] 'agreement among raft nodes before linearized reading'  (duration: 130.196879ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:25:05.880936Z","caller":"traceutil/trace.go:171","msg":"trace[1692349095] linearizableReadLoop","detail":"{readStateIndex:1387; appliedIndex:1386; }","duration":"351.199363ms","start":"2024-05-20T10:25:05.529701Z","end":"2024-05-20T10:25:05.8809Z","steps":["trace[1692349095] 'read index received'  (duration: 351.037465ms)","trace[1692349095] 'applied index is now lower than readState.Index'  (duration: 161.473µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:25:05.88126Z","caller":"traceutil/trace.go:171","msg":"trace[2067711876] transaction","detail":"{read_only:false; response_revision:1339; number_of_response:1; }","duration":"414.913921ms","start":"2024-05-20T10:25:05.466333Z","end":"2024-05-20T10:25:05.881247Z","steps":["trace[2067711876] 'process raft request'  (duration: 414.465075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:25:05.881526Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:25:05.466254Z","time spent":"415.105962ms","remote":"127.0.0.1:47296","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1672,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1628 >> failure:<>"}
	{"level":"warn","ts":"2024-05-20T10:25:05.881783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.076658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-05-20T10:25:05.881837Z","caller":"traceutil/trace.go:171","msg":"trace[937458996] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1339; }","duration":"352.194709ms","start":"2024-05-20T10:25:05.529633Z","end":"2024-05-20T10:25:05.881827Z","steps":["trace[937458996] 'agreement among raft nodes before linearized reading'  (duration: 352.048071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:25:05.881879Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:25:05.529619Z","time spent":"352.254984ms","remote":"127.0.0.1:47264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":844,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"info","ts":"2024-05-20T10:25:20.841822Z","caller":"traceutil/trace.go:171","msg":"trace[204895821] linearizableReadLoop","detail":"{readStateIndex:1527; appliedIndex:1526; }","duration":"174.914308ms","start":"2024-05-20T10:25:20.666886Z","end":"2024-05-20T10:25:20.8418Z","steps":["trace[204895821] 'read index received'  (duration: 174.757908ms)","trace[204895821] 'applied index is now lower than readState.Index'  (duration: 155.983µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:25:20.842026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.114553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5949"}
	{"level":"info","ts":"2024-05-20T10:25:20.842068Z","caller":"traceutil/trace.go:171","msg":"trace[749608842] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1470; }","duration":"175.200925ms","start":"2024-05-20T10:25:20.66686Z","end":"2024-05-20T10:25:20.842061Z","steps":["trace[749608842] 'agreement among raft nodes before linearized reading'  (duration: 175.049085ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:25:20.842334Z","caller":"traceutil/trace.go:171","msg":"trace[1811531730] transaction","detail":"{read_only:false; response_revision:1470; number_of_response:1; }","duration":"221.47184ms","start":"2024-05-20T10:25:20.620796Z","end":"2024-05-20T10:25:20.842268Z","steps":["trace[1811531730] 'process raft request'  (duration: 220.891499ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:25:27.697968Z","caller":"traceutil/trace.go:171","msg":"trace[683574811] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"319.176593ms","start":"2024-05-20T10:25:27.378775Z","end":"2024-05-20T10:25:27.697952Z","steps":["trace[683574811] 'process raft request'  (duration: 318.865016ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:25:27.698085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:25:27.378758Z","time spent":"319.251391ms","remote":"127.0.0.1:47368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1477 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-05-20T10:25:59.297022Z","caller":"traceutil/trace.go:171","msg":"trace[762250467] transaction","detail":"{read_only:false; response_revision:1763; number_of_response:1; }","duration":"242.148368ms","start":"2024-05-20T10:25:59.05486Z","end":"2024-05-20T10:25:59.297008Z","steps":["trace[762250467] 'process raft request'  (duration: 242.059861ms)"],"step_count":1}
	
	
	==> gcp-auth [fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3] <==
	2024/05/20 10:23:58 GCP Auth Webhook started!
	2024/05/20 10:25:00 Ready to marshal response ...
	2024/05/20 10:25:00 Ready to write response ...
	2024/05/20 10:25:00 Ready to marshal response ...
	2024/05/20 10:25:00 Ready to write response ...
	2024/05/20 10:25:00 Ready to marshal response ...
	2024/05/20 10:25:00 Ready to write response ...
	2024/05/20 10:25:05 Ready to marshal response ...
	2024/05/20 10:25:05 Ready to write response ...
	2024/05/20 10:25:07 Ready to marshal response ...
	2024/05/20 10:25:07 Ready to write response ...
	2024/05/20 10:25:15 Ready to marshal response ...
	2024/05/20 10:25:15 Ready to write response ...
	2024/05/20 10:25:18 Ready to marshal response ...
	2024/05/20 10:25:18 Ready to write response ...
	2024/05/20 10:25:23 Ready to marshal response ...
	2024/05/20 10:25:23 Ready to write response ...
	2024/05/20 10:25:24 Ready to marshal response ...
	2024/05/20 10:25:24 Ready to write response ...
	2024/05/20 10:25:38 Ready to marshal response ...
	2024/05/20 10:25:38 Ready to write response ...
	2024/05/20 10:25:40 Ready to marshal response ...
	2024/05/20 10:25:40 Ready to write response ...
	2024/05/20 10:27:38 Ready to marshal response ...
	2024/05/20 10:27:38 Ready to write response ...
	
	
	==> kernel <==
	 10:27:49 up 6 min,  0 users,  load average: 0.38, 0.72, 0.41
	Linux addons-807933 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] <==
	E0520 10:24:18.493787       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.157.213:443: connect: connection refused
	W0520 10:24:18.493957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 10:24:18.494026       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0520 10:24:18.494605       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.157.213:443: connect: connection refused
	E0520 10:24:18.499670       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.157.213:443: connect: connection refused
	I0520 10:24:18.570760       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 10:25:00.881941       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.185.251"}
	I0520 10:25:15.719897       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 10:25:15.902223       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.32.238"}
	I0520 10:25:18.520816       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0520 10:25:19.561780       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0520 10:25:34.735214       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 10:25:55.066758       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.067246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:25:55.095618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.096050       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:25:55.129992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.130155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:25:55.158256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.158405       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0520 10:25:56.130377       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 10:25:56.158548       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0520 10:25:56.228507       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 10:27:38.948989       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.52.112"}
	
	
	==> kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] <==
	E0520 10:26:33.174406       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:26:34.842615       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:26:34.842707       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:26:37.279926       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:26:37.279994       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:27:10.090001       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:27:10.090483       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:27:20.043749       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:27:20.043797       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:27:21.610908       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:27:21.610964       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:27:27.773377       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:27:27.773427       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:27:38.808921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="63.655136ms"
	I0520 10:27:38.821740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="12.702815ms"
	I0520 10:27:38.821981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="58.189µs"
	I0520 10:27:38.822063       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="24.919µs"
	I0520 10:27:38.825478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="21.804µs"
	I0520 10:27:40.966865       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0520 10:27:40.982896       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0520 10:27:40.983088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.093µs"
	I0520 10:27:42.963946       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="12.60032ms"
	I0520 10:27:42.964137       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="73.003µs"
	W0520 10:27:47.299069       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:27:47.299122       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] <==
	I0520 10:22:34.957793       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:22:34.981559       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.86"]
	I0520 10:22:35.096339       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:22:35.096386       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:22:35.096402       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:22:35.127204       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:22:35.127454       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:22:35.127470       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:22:35.128962       1 config.go:192] "Starting service config controller"
	I0520 10:22:35.128998       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:22:35.129024       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:22:35.129028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:22:35.129390       1 config.go:319] "Starting node config controller"
	I0520 10:22:35.129415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:22:35.235397       1 shared_informer.go:320] Caches are synced for node config
	I0520 10:22:35.235465       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:22:35.235497       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] <==
	E0520 10:22:16.290840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:22:16.290854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:22:16.290861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:22:16.290869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:22:16.290875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:22:16.290887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:22:16.290911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:22:16.290727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:22:16.291138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:22:16.291170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:22:16.291044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:22:16.291250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:22:17.091875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:22:17.091907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:22:17.151236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:22:17.151269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:22:17.292437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:22:17.292485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:22:17.305855       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:22:17.305904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:22:17.503401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:22:17.503525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:22:17.545707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:22:17.545757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0520 10:22:17.887147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:27:38 addons-807933 kubelet[1277]: I0520 10:27:38.805190    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="281dad2c-e7c6-4f48-9908-f6d1749c2207" containerName="csi-snapshotter"
	May 20 10:27:38 addons-807933 kubelet[1277]: I0520 10:27:38.805231    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="281dad2c-e7c6-4f48-9908-f6d1749c2207" containerName="csi-external-health-monitor-controller"
	May 20 10:27:38 addons-807933 kubelet[1277]: I0520 10:27:38.974252    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5d1a9475-8be4-4408-be74-84d6dd2f8b8c-gcp-creds\") pod \"hello-world-app-86c47465fc-85gqz\" (UID: \"5d1a9475-8be4-4408-be74-84d6dd2f8b8c\") " pod="default/hello-world-app-86c47465fc-85gqz"
	May 20 10:27:38 addons-807933 kubelet[1277]: I0520 10:27:38.974538    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv24n\" (UniqueName: \"kubernetes.io/projected/5d1a9475-8be4-4408-be74-84d6dd2f8b8c-kube-api-access-tv24n\") pod \"hello-world-app-86c47465fc-85gqz\" (UID: \"5d1a9475-8be4-4408-be74-84d6dd2f8b8c\") " pod="default/hello-world-app-86c47465fc-85gqz"
	May 20 10:27:39 addons-807933 kubelet[1277]: I0520 10:27:39.913262    1277 scope.go:117] "RemoveContainer" containerID="2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512"
	May 20 10:27:39 addons-807933 kubelet[1277]: I0520 10:27:39.934670    1277 scope.go:117] "RemoveContainer" containerID="2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512"
	May 20 10:27:39 addons-807933 kubelet[1277]: E0520 10:27:39.935230    1277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512\": container with ID starting with 2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512 not found: ID does not exist" containerID="2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512"
	May 20 10:27:39 addons-807933 kubelet[1277]: I0520 10:27:39.935260    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512"} err="failed to get container status \"2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512\": rpc error: code = NotFound desc = could not find container \"2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512\": container with ID starting with 2a4e0a0d9d4eeabcde7623f866e2617c59622bb573ad36b403aa859c9daa2512 not found: ID does not exist"
	May 20 10:27:40 addons-807933 kubelet[1277]: I0520 10:27:40.085971    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg5j9\" (UniqueName: \"kubernetes.io/projected/29cc0a3f-eade-48c7-89c4-00bb5444237c-kube-api-access-lg5j9\") pod \"29cc0a3f-eade-48c7-89c4-00bb5444237c\" (UID: \"29cc0a3f-eade-48c7-89c4-00bb5444237c\") "
	May 20 10:27:40 addons-807933 kubelet[1277]: I0520 10:27:40.088157    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29cc0a3f-eade-48c7-89c4-00bb5444237c-kube-api-access-lg5j9" (OuterVolumeSpecName: "kube-api-access-lg5j9") pod "29cc0a3f-eade-48c7-89c4-00bb5444237c" (UID: "29cc0a3f-eade-48c7-89c4-00bb5444237c"). InnerVolumeSpecName "kube-api-access-lg5j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:27:40 addons-807933 kubelet[1277]: I0520 10:27:40.186518    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lg5j9\" (UniqueName: \"kubernetes.io/projected/29cc0a3f-eade-48c7-89c4-00bb5444237c-kube-api-access-lg5j9\") on node \"addons-807933\" DevicePath \"\""
	May 20 10:27:40 addons-807933 kubelet[1277]: I0520 10:27:40.650021    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29cc0a3f-eade-48c7-89c4-00bb5444237c" path="/var/lib/kubelet/pods/29cc0a3f-eade-48c7-89c4-00bb5444237c/volumes"
	May 20 10:27:42 addons-807933 kubelet[1277]: I0520 10:27:42.637630    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c0e0f5-9fc4-4682-82ee-d36d4adaf012" path="/var/lib/kubelet/pods/46c0e0f5-9fc4-4682-82ee-d36d4adaf012/volumes"
	May 20 10:27:42 addons-807933 kubelet[1277]: I0520 10:27:42.638064    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d92727-8edb-4d89-b56b-6981e64d203c" path="/var/lib/kubelet/pods/b7d92727-8edb-4d89-b56b-6981e64d203c/volumes"
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.324626    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30fbb3e4-6352-4d15-99dd-fedde65cf5c0-webhook-cert\") pod \"30fbb3e4-6352-4d15-99dd-fedde65cf5c0\" (UID: \"30fbb3e4-6352-4d15-99dd-fedde65cf5c0\") "
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.324688    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55nwf\" (UniqueName: \"kubernetes.io/projected/30fbb3e4-6352-4d15-99dd-fedde65cf5c0-kube-api-access-55nwf\") pod \"30fbb3e4-6352-4d15-99dd-fedde65cf5c0\" (UID: \"30fbb3e4-6352-4d15-99dd-fedde65cf5c0\") "
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.326803    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30fbb3e4-6352-4d15-99dd-fedde65cf5c0-kube-api-access-55nwf" (OuterVolumeSpecName: "kube-api-access-55nwf") pod "30fbb3e4-6352-4d15-99dd-fedde65cf5c0" (UID: "30fbb3e4-6352-4d15-99dd-fedde65cf5c0"). InnerVolumeSpecName "kube-api-access-55nwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.328858    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fbb3e4-6352-4d15-99dd-fedde65cf5c0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "30fbb3e4-6352-4d15-99dd-fedde65cf5c0" (UID: "30fbb3e4-6352-4d15-99dd-fedde65cf5c0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.425357    1277 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30fbb3e4-6352-4d15-99dd-fedde65cf5c0-webhook-cert\") on node \"addons-807933\" DevicePath \"\""
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.425389    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-55nwf\" (UniqueName: \"kubernetes.io/projected/30fbb3e4-6352-4d15-99dd-fedde65cf5c0-kube-api-access-55nwf\") on node \"addons-807933\" DevicePath \"\""
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.638229    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30fbb3e4-6352-4d15-99dd-fedde65cf5c0" path="/var/lib/kubelet/pods/30fbb3e4-6352-4d15-99dd-fedde65cf5c0/volumes"
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.948091    1277 scope.go:117] "RemoveContainer" containerID="67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2"
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.964921    1277 scope.go:117] "RemoveContainer" containerID="67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2"
	May 20 10:27:44 addons-807933 kubelet[1277]: E0520 10:27:44.965489    1277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2\": container with ID starting with 67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2 not found: ID does not exist" containerID="67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2"
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.965521    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2"} err="failed to get container status \"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2\": rpc error: code = NotFound desc = could not find container \"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2\": container with ID starting with 67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2 not found: ID does not exist"
	
	
	==> storage-provisioner [95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88] <==
	I0520 10:22:40.504688       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:22:40.654568       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:22:40.654823       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:22:40.698770       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:22:40.698904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-807933_44e4357c-0f68-4b46-b0f9-76a01654c97f!
	I0520 10:22:40.699527       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"019bf97e-7887-4eaa-92b6-0d457976fd1e", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-807933_44e4357c-0f68-4b46-b0f9-76a01654c97f became leader
	I0520 10:22:40.900733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-807933_44e4357c-0f68-4b46-b0f9-76a01654c97f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-807933 -n addons-807933
helpers_test.go:261: (dbg) Run:  kubectl --context addons-807933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.65s)
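
The step that never completed in this run was the `addons-807933 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` check (the Audit table further down records it with a start time but no end time). A rough diagnostic sketch, not part of the recorded run and assuming the same profile and context names used in the commands above, would be to inspect the ingress-nginx controller and repeat the routed request while capturing the HTTP status code:

    kubectl --context addons-807933 -n ingress-nginx get pods,svc
    kubectl --context addons-807933 -n default get ingress
    out/minikube-linux-amd64 -p addons-807933 ssh "curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"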

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (340.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.776606ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-8j5ls" [bcafd302-7969-429c-b306-290348b05d85] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0087042s
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (68.94306ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 2m44.820116017s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (67.826025ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 2m46.438865722s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (74.939325ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 2m49.046603624s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (65.602553ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 2m57.590406368s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (66.557714ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 3m8.01722491s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (61.820637ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 3m28.859007717s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (62.971809ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 3m46.832130264s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (63.524108ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 4m31.679682965s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (63.354899ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 5m34.913654022s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (62.611602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 6m57.922143646s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-807933 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-807933 top pods -n kube-system: exit status 1 (63.266302ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-mq6st, age: 8m17.424705945s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
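
Every `kubectl top pods` attempt above failed with "Metrics not available", i.e. the Metrics API never served pod metrics within the test's retry window, which is consistent with the v1beta1.metrics.k8s.io availability errors in the kube-apiserver log earlier in this report. As a hedged diagnostic sketch (same context name as above; these commands are not part of the recorded run), one could check whether the metrics APIService ever became Available and query the aggregated API directly:

    kubectl --context addons-807933 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-807933 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"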
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-807933 -n addons-807933
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-807933 logs -n 25: (1.378033815s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-824408                                                                     | download-only-824408 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-865403                                                                     | download-only-865403 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-824408                                                                     | download-only-824408 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-680660 | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | binary-mirror-680660                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45873                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-680660                                                                     | binary-mirror-680660 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-807933 --wait=true                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:25 UTC |
	|         | -p addons-807933                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | -p addons-807933                                                                            |                      |         |         |                     |                     |
	| ip      | addons-807933 ip                                                                            | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | addons-807933                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-807933 ssh curl -s                                                                   | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-807933 ssh cat                                                                       | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | /opt/local-path-provisioner/pvc-c2976ab9-5fd0-4fd1-8996-109c08730171_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-807933 addons                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-807933 addons                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:25 UTC | 20 May 24 10:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-807933 ip                                                                            | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:27 UTC | 20 May 24 10:27 UTC |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:27 UTC | 20 May 24 10:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-807933 addons disable                                                                | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:27 UTC | 20 May 24 10:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-807933 addons                                                                        | addons-807933        | jenkins | v1.33.1 | 20 May 24 10:30 UTC | 20 May 24 10:30 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:21:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:21:36.901543   12352 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:21:36.901773   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:21:36.901781   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:21:36.901785   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:21:36.901985   12352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:21:36.902556   12352 out.go:298] Setting JSON to false
	I0520 10:21:36.903332   12352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":240,"bootTime":1716200257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:21:36.903389   12352 start.go:139] virtualization: kvm guest
	I0520 10:21:36.905525   12352 out.go:177] * [addons-807933] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:21:36.907151   12352 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:21:36.908506   12352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:21:36.907133   12352 notify.go:220] Checking for updates...
	I0520 10:21:36.911067   12352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:21:36.912327   12352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:36.913462   12352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:21:36.914680   12352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:21:36.916063   12352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:21:36.947276   12352 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 10:21:36.948582   12352 start.go:297] selected driver: kvm2
	I0520 10:21:36.948598   12352 start.go:901] validating driver "kvm2" against <nil>
	I0520 10:21:36.948624   12352 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:21:36.949330   12352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:21:36.949409   12352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:21:36.963367   12352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:21:36.963409   12352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:21:36.963609   12352 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:21:36.963634   12352 cni.go:84] Creating CNI manager for ""
	I0520 10:21:36.963641   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:21:36.963650   12352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 10:21:36.963692   12352 start.go:340] cluster config:
	{Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:21:36.963773   12352 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:21:36.965537   12352 out.go:177] * Starting "addons-807933" primary control-plane node in "addons-807933" cluster
	I0520 10:21:36.966826   12352 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:21:36.966861   12352 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:21:36.966871   12352 cache.go:56] Caching tarball of preloaded images
	I0520 10:21:36.966962   12352 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:21:36.966977   12352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:21:36.967398   12352 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/config.json ...
	I0520 10:21:36.967424   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/config.json: {Name:mk89a6bd9d43695cbac9dcfd1e40eefc7cccf357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:21:36.967568   12352 start.go:360] acquireMachinesLock for addons-807933: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:21:36.967614   12352 start.go:364] duration metric: took 32.236µs to acquireMachinesLock for "addons-807933"
	I0520 10:21:36.967631   12352 start.go:93] Provisioning new machine with config: &{Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:21:36.967720   12352 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 10:21:36.969275   12352 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0520 10:21:36.969385   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:21:36.969426   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:21:36.982852   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0520 10:21:36.983252   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:21:36.983741   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:21:36.983761   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:21:36.984071   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:21:36.984234   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:21:36.984394   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:21:36.984512   12352 start.go:159] libmachine.API.Create for "addons-807933" (driver="kvm2")
	I0520 10:21:36.984541   12352 client.go:168] LocalClient.Create starting
	I0520 10:21:36.984586   12352 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:21:37.161005   12352 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:21:37.333200   12352 main.go:141] libmachine: Running pre-create checks...
	I0520 10:21:37.333224   12352 main.go:141] libmachine: (addons-807933) Calling .PreCreateCheck
	I0520 10:21:37.333721   12352 main.go:141] libmachine: (addons-807933) Calling .GetConfigRaw
	I0520 10:21:37.334141   12352 main.go:141] libmachine: Creating machine...
	I0520 10:21:37.334155   12352 main.go:141] libmachine: (addons-807933) Calling .Create
	I0520 10:21:37.334293   12352 main.go:141] libmachine: (addons-807933) Creating KVM machine...
	I0520 10:21:37.335577   12352 main.go:141] libmachine: (addons-807933) DBG | found existing default KVM network
	I0520 10:21:37.336287   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.336159   12374 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0520 10:21:37.336308   12352 main.go:141] libmachine: (addons-807933) DBG | created network xml: 
	I0520 10:21:37.336321   12352 main.go:141] libmachine: (addons-807933) DBG | <network>
	I0520 10:21:37.336335   12352 main.go:141] libmachine: (addons-807933) DBG |   <name>mk-addons-807933</name>
	I0520 10:21:37.336353   12352 main.go:141] libmachine: (addons-807933) DBG |   <dns enable='no'/>
	I0520 10:21:37.336365   12352 main.go:141] libmachine: (addons-807933) DBG |   
	I0520 10:21:37.336404   12352 main.go:141] libmachine: (addons-807933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 10:21:37.336424   12352 main.go:141] libmachine: (addons-807933) DBG |     <dhcp>
	I0520 10:21:37.336432   12352 main.go:141] libmachine: (addons-807933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 10:21:37.336441   12352 main.go:141] libmachine: (addons-807933) DBG |     </dhcp>
	I0520 10:21:37.336447   12352 main.go:141] libmachine: (addons-807933) DBG |   </ip>
	I0520 10:21:37.336452   12352 main.go:141] libmachine: (addons-807933) DBG |   
	I0520 10:21:37.336458   12352 main.go:141] libmachine: (addons-807933) DBG | </network>
	I0520 10:21:37.336466   12352 main.go:141] libmachine: (addons-807933) DBG | 
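	(Editor's note: the <network> XML dumped above is what the kvm2 driver hands to libvirt before the VM exists. Purely as an illustrative sketch, not part of this test run, and assuming the same XML were saved as mk-addons-807933.xml on a host with the virsh CLI, an equivalent private network could be created by hand as follows.)
	  # Register and start the private network from the XML shown above
	  virsh net-define mk-addons-807933.xml
	  virsh net-start mk-addons-807933
	  # Confirm the bridge and its DHCP range are active
	  virsh net-info mk-addons-807933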
	I0520 10:21:37.341935   12352 main.go:141] libmachine: (addons-807933) DBG | trying to create private KVM network mk-addons-807933 192.168.39.0/24...
	I0520 10:21:37.403821   12352 main.go:141] libmachine: (addons-807933) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933 ...
	I0520 10:21:37.403854   12352 main.go:141] libmachine: (addons-807933) DBG | private KVM network mk-addons-807933 192.168.39.0/24 created
	I0520 10:21:37.403867   12352 main.go:141] libmachine: (addons-807933) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:21:37.403885   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.403748   12374 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:37.403939   12352 main.go:141] libmachine: (addons-807933) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:21:37.665472   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.665383   12374 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa...
	I0520 10:21:37.748524   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.748399   12374 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/addons-807933.rawdisk...
	I0520 10:21:37.748551   12352 main.go:141] libmachine: (addons-807933) DBG | Writing magic tar header
	I0520 10:21:37.748561   12352 main.go:141] libmachine: (addons-807933) DBG | Writing SSH key tar header
	I0520 10:21:37.748572   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:37.748540   12374 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933 ...
	I0520 10:21:37.748669   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933
	I0520 10:21:37.748690   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:21:37.748703   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933 (perms=drwx------)
	I0520 10:21:37.748716   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:21:37.748726   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:21:37.748741   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:21:37.748752   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:21:37.748766   12352 main.go:141] libmachine: (addons-807933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:21:37.748782   12352 main.go:141] libmachine: (addons-807933) Creating domain...
	I0520 10:21:37.748792   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:37.748805   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:21:37.748816   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:21:37.748827   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:21:37.748839   12352 main.go:141] libmachine: (addons-807933) DBG | Checking permissions on dir: /home
	I0520 10:21:37.748866   12352 main.go:141] libmachine: (addons-807933) DBG | Skipping /home - not owner
	I0520 10:21:37.749721   12352 main.go:141] libmachine: (addons-807933) define libvirt domain using xml: 
	I0520 10:21:37.749733   12352 main.go:141] libmachine: (addons-807933) <domain type='kvm'>
	I0520 10:21:37.749742   12352 main.go:141] libmachine: (addons-807933)   <name>addons-807933</name>
	I0520 10:21:37.749755   12352 main.go:141] libmachine: (addons-807933)   <memory unit='MiB'>4000</memory>
	I0520 10:21:37.749764   12352 main.go:141] libmachine: (addons-807933)   <vcpu>2</vcpu>
	I0520 10:21:37.749778   12352 main.go:141] libmachine: (addons-807933)   <features>
	I0520 10:21:37.749788   12352 main.go:141] libmachine: (addons-807933)     <acpi/>
	I0520 10:21:37.749797   12352 main.go:141] libmachine: (addons-807933)     <apic/>
	I0520 10:21:37.749805   12352 main.go:141] libmachine: (addons-807933)     <pae/>
	I0520 10:21:37.749814   12352 main.go:141] libmachine: (addons-807933)     
	I0520 10:21:37.749819   12352 main.go:141] libmachine: (addons-807933)   </features>
	I0520 10:21:37.749824   12352 main.go:141] libmachine: (addons-807933)   <cpu mode='host-passthrough'>
	I0520 10:21:37.749833   12352 main.go:141] libmachine: (addons-807933)   
	I0520 10:21:37.749846   12352 main.go:141] libmachine: (addons-807933)   </cpu>
	I0520 10:21:37.749854   12352 main.go:141] libmachine: (addons-807933)   <os>
	I0520 10:21:37.749861   12352 main.go:141] libmachine: (addons-807933)     <type>hvm</type>
	I0520 10:21:37.749908   12352 main.go:141] libmachine: (addons-807933)     <boot dev='cdrom'/>
	I0520 10:21:37.749932   12352 main.go:141] libmachine: (addons-807933)     <boot dev='hd'/>
	I0520 10:21:37.749956   12352 main.go:141] libmachine: (addons-807933)     <bootmenu enable='no'/>
	I0520 10:21:37.749977   12352 main.go:141] libmachine: (addons-807933)   </os>
	I0520 10:21:37.749990   12352 main.go:141] libmachine: (addons-807933)   <devices>
	I0520 10:21:37.750006   12352 main.go:141] libmachine: (addons-807933)     <disk type='file' device='cdrom'>
	I0520 10:21:37.750028   12352 main.go:141] libmachine: (addons-807933)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/boot2docker.iso'/>
	I0520 10:21:37.750040   12352 main.go:141] libmachine: (addons-807933)       <target dev='hdc' bus='scsi'/>
	I0520 10:21:37.750050   12352 main.go:141] libmachine: (addons-807933)       <readonly/>
	I0520 10:21:37.750062   12352 main.go:141] libmachine: (addons-807933)     </disk>
	I0520 10:21:37.750072   12352 main.go:141] libmachine: (addons-807933)     <disk type='file' device='disk'>
	I0520 10:21:37.750081   12352 main.go:141] libmachine: (addons-807933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:21:37.750089   12352 main.go:141] libmachine: (addons-807933)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/addons-807933.rawdisk'/>
	I0520 10:21:37.750097   12352 main.go:141] libmachine: (addons-807933)       <target dev='hda' bus='virtio'/>
	I0520 10:21:37.750102   12352 main.go:141] libmachine: (addons-807933)     </disk>
	I0520 10:21:37.750109   12352 main.go:141] libmachine: (addons-807933)     <interface type='network'>
	I0520 10:21:37.750115   12352 main.go:141] libmachine: (addons-807933)       <source network='mk-addons-807933'/>
	I0520 10:21:37.750122   12352 main.go:141] libmachine: (addons-807933)       <model type='virtio'/>
	I0520 10:21:37.750133   12352 main.go:141] libmachine: (addons-807933)     </interface>
	I0520 10:21:37.750141   12352 main.go:141] libmachine: (addons-807933)     <interface type='network'>
	I0520 10:21:37.750148   12352 main.go:141] libmachine: (addons-807933)       <source network='default'/>
	I0520 10:21:37.750152   12352 main.go:141] libmachine: (addons-807933)       <model type='virtio'/>
	I0520 10:21:37.750159   12352 main.go:141] libmachine: (addons-807933)     </interface>
	I0520 10:21:37.750164   12352 main.go:141] libmachine: (addons-807933)     <serial type='pty'>
	I0520 10:21:37.750172   12352 main.go:141] libmachine: (addons-807933)       <target port='0'/>
	I0520 10:21:37.750176   12352 main.go:141] libmachine: (addons-807933)     </serial>
	I0520 10:21:37.750184   12352 main.go:141] libmachine: (addons-807933)     <console type='pty'>
	I0520 10:21:37.750189   12352 main.go:141] libmachine: (addons-807933)       <target type='serial' port='0'/>
	I0520 10:21:37.750197   12352 main.go:141] libmachine: (addons-807933)     </console>
	I0520 10:21:37.750202   12352 main.go:141] libmachine: (addons-807933)     <rng model='virtio'>
	I0520 10:21:37.750211   12352 main.go:141] libmachine: (addons-807933)       <backend model='random'>/dev/random</backend>
	I0520 10:21:37.750218   12352 main.go:141] libmachine: (addons-807933)     </rng>
	I0520 10:21:37.750223   12352 main.go:141] libmachine: (addons-807933)     
	I0520 10:21:37.750229   12352 main.go:141] libmachine: (addons-807933)     
	I0520 10:21:37.750235   12352 main.go:141] libmachine: (addons-807933)   </devices>
	I0520 10:21:37.750241   12352 main.go:141] libmachine: (addons-807933) </domain>
	I0520 10:21:37.750248   12352 main.go:141] libmachine: (addons-807933) 
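	(Editor's note: the <domain> definition above is applied through the libvirt API in the same way. As a sketch only, assuming this XML were saved as addons-807933.xml, roughly the same result could be reproduced from the shell.)
	  # Register the domain from the XML above and boot it
	  virsh define addons-807933.xml
	  virsh start addons-807933
	  # The DHCP lease that the retries below poll for can be inspected with:
	  virsh net-dhcp-leases mk-addons-807933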
	I0520 10:21:37.755765   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:7b:b6:89 in network default
	I0520 10:21:37.756304   12352 main.go:141] libmachine: (addons-807933) Ensuring networks are active...
	I0520 10:21:37.756324   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:37.756900   12352 main.go:141] libmachine: (addons-807933) Ensuring network default is active
	I0520 10:21:37.757195   12352 main.go:141] libmachine: (addons-807933) Ensuring network mk-addons-807933 is active
	I0520 10:21:37.757664   12352 main.go:141] libmachine: (addons-807933) Getting domain xml...
	I0520 10:21:37.758263   12352 main.go:141] libmachine: (addons-807933) Creating domain...
	I0520 10:21:39.114451   12352 main.go:141] libmachine: (addons-807933) Waiting to get IP...
	I0520 10:21:39.115120   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.115504   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.115556   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.115494   12374 retry.go:31] will retry after 207.51322ms: waiting for machine to come up
	I0520 10:21:39.325016   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.325418   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.325464   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.325412   12374 retry.go:31] will retry after 290.80023ms: waiting for machine to come up
	I0520 10:21:39.617921   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.618379   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.618401   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.618362   12374 retry.go:31] will retry after 329.008994ms: waiting for machine to come up
	I0520 10:21:39.949057   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:39.949514   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:39.949543   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:39.949455   12374 retry.go:31] will retry after 519.550951ms: waiting for machine to come up
	I0520 10:21:40.470072   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:40.470752   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:40.470801   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:40.470688   12374 retry.go:31] will retry after 652.239224ms: waiting for machine to come up
	I0520 10:21:41.124533   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:41.124997   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:41.125025   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:41.124919   12374 retry.go:31] will retry after 674.149446ms: waiting for machine to come up
	I0520 10:21:41.800868   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:41.801371   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:41.801392   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:41.801338   12374 retry.go:31] will retry after 1.15805065s: waiting for machine to come up
	I0520 10:21:42.960475   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:42.960840   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:42.960874   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:42.960831   12374 retry.go:31] will retry after 1.145933963s: waiting for machine to come up
	I0520 10:21:44.108108   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:44.108501   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:44.108529   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:44.108461   12374 retry.go:31] will retry after 1.384710006s: waiting for machine to come up
	I0520 10:21:45.495028   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:45.495367   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:45.495381   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:45.495335   12374 retry.go:31] will retry after 1.836113332s: waiting for machine to come up
	I0520 10:21:47.333316   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:47.333741   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:47.333775   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:47.333665   12374 retry.go:31] will retry after 2.54852114s: waiting for machine to come up
	I0520 10:21:49.883302   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:49.883637   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:49.883659   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:49.883594   12374 retry.go:31] will retry after 2.68971717s: waiting for machine to come up
	I0520 10:21:52.574915   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:52.575294   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:52.575322   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:52.575261   12374 retry.go:31] will retry after 3.09017513s: waiting for machine to come up
	I0520 10:21:55.669446   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:21:55.669871   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find current IP address of domain addons-807933 in network mk-addons-807933
	I0520 10:21:55.669899   12352 main.go:141] libmachine: (addons-807933) DBG | I0520 10:21:55.669813   12374 retry.go:31] will retry after 4.957723981s: waiting for machine to come up
	I0520 10:22:00.629996   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.630423   12352 main.go:141] libmachine: (addons-807933) Found IP for machine: 192.168.39.86
	I0520 10:22:00.630438   12352 main.go:141] libmachine: (addons-807933) Reserving static IP address...
	I0520 10:22:00.630463   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has current primary IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.630857   12352 main.go:141] libmachine: (addons-807933) DBG | unable to find host DHCP lease matching {name: "addons-807933", mac: "52:54:00:ab:70:6f", ip: "192.168.39.86"} in network mk-addons-807933
	I0520 10:22:00.698284   12352 main.go:141] libmachine: (addons-807933) DBG | Getting to WaitForSSH function...
	I0520 10:22:00.698315   12352 main.go:141] libmachine: (addons-807933) Reserved static IP address: 192.168.39.86
	I0520 10:22:00.698329   12352 main.go:141] libmachine: (addons-807933) Waiting for SSH to be available...
	I0520 10:22:00.700476   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.700791   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:00.700823   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.700980   12352 main.go:141] libmachine: (addons-807933) DBG | Using SSH client type: external
	I0520 10:22:00.701010   12352 main.go:141] libmachine: (addons-807933) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa (-rw-------)
	I0520 10:22:00.701050   12352 main.go:141] libmachine: (addons-807933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:22:00.701076   12352 main.go:141] libmachine: (addons-807933) DBG | About to run SSH command:
	I0520 10:22:00.701090   12352 main.go:141] libmachine: (addons-807933) DBG | exit 0
	I0520 10:22:00.833271   12352 main.go:141] libmachine: (addons-807933) DBG | SSH cmd err, output: <nil>: 
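	(Editor's note: the external SSH probe above boils down to a plain ssh invocation with host-key checking disabled, run until the guest answers. An illustrative hand-run equivalent, reusing the key path and guest IP from the log, would be the following.)
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa \
	      docker@192.168.39.86 'exit 0' && echo "SSH is up"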
	I0520 10:22:00.833552   12352 main.go:141] libmachine: (addons-807933) KVM machine creation complete!
	I0520 10:22:00.833867   12352 main.go:141] libmachine: (addons-807933) Calling .GetConfigRaw
	I0520 10:22:00.834386   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:00.834581   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:00.834739   12352 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:22:00.834752   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:00.835872   12352 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:22:00.835886   12352 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:22:00.835891   12352 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:22:00.835896   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:00.838191   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.838537   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:00.838565   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.838706   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:00.838880   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.839030   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.839137   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:00.839269   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:00.839509   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:00.839523   12352 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:22:00.944029   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:22:00.944068   12352 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:22:00.944079   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:00.946464   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.946849   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:00.946873   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:00.946986   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:00.947178   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.947337   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:00.947491   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:00.947682   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:00.947888   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:00.947904   12352 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:22:01.053668   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:22:01.053784   12352 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:22:01.053808   12352 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:22:01.053819   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:22:01.054066   12352 buildroot.go:166] provisioning hostname "addons-807933"
	I0520 10:22:01.054090   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:22:01.054300   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.056504   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.056829   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.056854   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.056979   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.057135   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.057280   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.057549   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.057706   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:01.057867   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:01.057879   12352 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-807933 && echo "addons-807933" | sudo tee /etc/hostname
	I0520 10:22:01.179301   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-807933
	
	I0520 10:22:01.179322   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.182868   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.183240   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.183268   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.183565   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.183732   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.183888   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.184023   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.184181   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:01.184384   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:01.184410   12352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-807933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-807933/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-807933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:22:01.301764   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:22:01.301799   12352 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:22:01.301826   12352 buildroot.go:174] setting up certificates
	I0520 10:22:01.301837   12352 provision.go:84] configureAuth start
	I0520 10:22:01.301845   12352 main.go:141] libmachine: (addons-807933) Calling .GetMachineName
	I0520 10:22:01.302115   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:01.304657   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.305011   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.305045   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.305193   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.307005   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.307297   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.307328   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.307423   12352 provision.go:143] copyHostCerts
	I0520 10:22:01.307499   12352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:22:01.307624   12352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:22:01.307700   12352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:22:01.307763   12352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.addons-807933 san=[127.0.0.1 192.168.39.86 addons-807933 localhost minikube]
	I0520 10:22:01.485009   12352 provision.go:177] copyRemoteCerts
	I0520 10:22:01.485065   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:22:01.485087   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.487300   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.487569   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.487600   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.487727   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.487879   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.488020   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.488178   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:01.571179   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:22:01.595793   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:22:01.620065   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 10:22:01.643224   12352 provision.go:87] duration metric: took 341.376852ms to configureAuth
	I0520 10:22:01.643245   12352 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:22:01.643395   12352 config.go:182] Loaded profile config "addons-807933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:22:01.643458   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.645898   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.646173   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.646195   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.646365   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.646514   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.646624   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.646705   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.646805   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:01.646994   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:01.647009   12352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:22:01.910427   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
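	(Editor's note: the command above writes /etc/sysconfig/crio.minikube, marking the service CIDR 10.96.0.0/12 as an insecure registry, and then restarts CRI-O. A quick way to verify the effect afterwards, sketched under the assumption that the addons-807933 profile is still running, would be:)
	  minikube -p addons-807933 ssh -- cat /etc/sysconfig/crio.minikube
	  minikube -p addons-807933 ssh -- sudo systemctl is-active crio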
	
	I0520 10:22:01.910456   12352 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:22:01.910465   12352 main.go:141] libmachine: (addons-807933) Calling .GetURL
	I0520 10:22:01.911672   12352 main.go:141] libmachine: (addons-807933) DBG | Using libvirt version 6000000
	I0520 10:22:01.913573   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.913899   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.913921   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.914116   12352 main.go:141] libmachine: Docker is up and running!
	I0520 10:22:01.914136   12352 main.go:141] libmachine: Reticulating splines...
	I0520 10:22:01.914144   12352 client.go:171] duration metric: took 24.929592731s to LocalClient.Create
	I0520 10:22:01.914169   12352 start.go:167] duration metric: took 24.92965939s to libmachine.API.Create "addons-807933"
	I0520 10:22:01.914179   12352 start.go:293] postStartSetup for "addons-807933" (driver="kvm2")
	I0520 10:22:01.914198   12352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:22:01.914223   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:01.914460   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:22:01.914483   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:01.916363   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.916624   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:01.916655   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:01.916748   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:01.916907   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:01.917074   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:01.917196   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:01.999784   12352 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:22:02.003909   12352 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:22:02.003930   12352 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:22:02.003993   12352 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:22:02.004014   12352 start.go:296] duration metric: took 89.826536ms for postStartSetup
	I0520 10:22:02.004044   12352 main.go:141] libmachine: (addons-807933) Calling .GetConfigRaw
	I0520 10:22:02.004542   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:02.006866   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.007185   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.007211   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.007467   12352 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/config.json ...
	I0520 10:22:02.007661   12352 start.go:128] duration metric: took 25.039930927s to createHost
	I0520 10:22:02.007684   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:02.009613   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.009902   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.009929   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.010001   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:02.010198   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.010335   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.010490   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:02.010597   12352 main.go:141] libmachine: Using SSH client type: native
	I0520 10:22:02.010757   12352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0520 10:22:02.010767   12352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 10:22:02.117605   12352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716200522.094602142
	
	I0520 10:22:02.117634   12352 fix.go:216] guest clock: 1716200522.094602142
	I0520 10:22:02.117645   12352 fix.go:229] Guest: 2024-05-20 10:22:02.094602142 +0000 UTC Remote: 2024-05-20 10:22:02.007672113 +0000 UTC m=+25.136645980 (delta=86.930029ms)
	I0520 10:22:02.117703   12352 fix.go:200] guest clock delta is within tolerance: 86.930029ms
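	The clock comparison above can be retraced by hand; a rough sketch in shell (it assumes the SSH key path from this run and ignores the SSH round-trip, which inflates the measured delta slightly):

	    # Sample the guest clock over SSH, then compare against the local clock (sketch only)
	    GUEST=$(ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa \
	      docker@192.168.39.86 'date +%s.%N')
	    HOST=$(date +%s.%N)
	    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "clock delta: %.6f s\n", h - g }'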
	I0520 10:22:02.117715   12352 start.go:83] releasing machines lock for "addons-807933", held for 25.150089452s
	I0520 10:22:02.117743   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.118010   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:02.120226   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.120468   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.120505   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.120582   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.121176   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.121355   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:02.121426   12352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:22:02.121483   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:02.121564   12352 ssh_runner.go:195] Run: cat /version.json
	I0520 10:22:02.121595   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:02.123698   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124019   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.124045   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124207   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124229   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:02.124407   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.124558   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:02.124611   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:02.124636   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:02.124667   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:02.124820   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:02.124971   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:02.125131   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:02.125242   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	W0520 10:22:02.226073   12352 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:22:02.226157   12352 ssh_runner.go:195] Run: systemctl --version
	I0520 10:22:02.232287   12352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:22:02.391810   12352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:22:02.397827   12352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:22:02.397882   12352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:22:02.414487   12352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:22:02.414515   12352 start.go:494] detecting cgroup driver to use...
	I0520 10:22:02.414599   12352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:22:02.431872   12352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:22:02.446114   12352 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:22:02.446173   12352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:22:02.459737   12352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:22:02.472848   12352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:22:02.594036   12352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:22:02.738348   12352 docker.go:233] disabling docker service ...
	I0520 10:22:02.738405   12352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:22:02.752143   12352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:22:02.765332   12352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:22:02.885365   12352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:22:03.004315   12352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:22:03.018879   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:22:03.037825   12352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:22:03.037880   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.048182   12352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:22:03.048244   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.058326   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.068331   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.078511   12352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:22:03.089130   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.099161   12352 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.116582   12352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:22:03.126637   12352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:22:03.135609   12352 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:22:03.135663   12352 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:22:03.148608   12352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:22:03.158172   12352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:22:03.267905   12352 ssh_runner.go:195] Run: sudo systemctl restart crio
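	For anyone retracing the CRI-O reconfiguration above inside the VM (e.g. via minikube ssh), a quick verification sketch against the same drop-in file:

	    # The pause image, cgroup driver and unprivileged-port sysctl should all be present in the drop-in
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf

	    # br_netfilter was just loaded, so this should now resolve instead of failing as it did earlier
	    sudo sysctl net.bridge.bridge-nf-call-iptables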
	I0520 10:22:03.403443   12352 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:22:03.403521   12352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:22:03.408494   12352 start.go:562] Will wait 60s for crictl version
	I0520 10:22:03.408538   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:22:03.412131   12352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:22:03.451794   12352 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:22:03.451899   12352 ssh_runner.go:195] Run: crio --version
	I0520 10:22:03.479305   12352 ssh_runner.go:195] Run: crio --version
	I0520 10:22:03.507771   12352 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:22:03.509123   12352 main.go:141] libmachine: (addons-807933) Calling .GetIP
	I0520 10:22:03.511510   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:03.511842   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:03.511871   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:03.512077   12352 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:22:03.516133   12352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:22:03.528472   12352 kubeadm.go:877] updating cluster {Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:22:03.528599   12352 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:22:03.528659   12352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:22:03.560373   12352 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 10:22:03.560487   12352 ssh_runner.go:195] Run: which lz4
	I0520 10:22:03.564446   12352 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 10:22:03.568574   12352 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 10:22:03.568599   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 10:22:04.882322   12352 crio.go:462] duration metric: took 1.317908901s to copy over tarball
	I0520 10:22:04.882388   12352 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 10:22:07.176719   12352 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.294308037s)
	I0520 10:22:07.176741   12352 crio.go:469] duration metric: took 2.2943906s to extract the tarball
	I0520 10:22:07.176750   12352 ssh_runner.go:146] rm: /preloaded.tar.lz4
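	Condensed, the preload step above copies the cached image tarball into the guest and unpacks it over /var so CRI-O's image store is already populated before kubeadm runs. A manual sketch (minikube streams the file over its own SSH runner; staging through /tmp is an assumption here, since the docker user cannot write to /):

	    KEY=/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa
	    TARBALL=/home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4

	    # Stage the tarball in the guest, then unpack it over /var keeping xattrs/capabilities
	    scp -i "$KEY" "$TARBALL" docker@192.168.39.86:/tmp/preloaded.tar.lz4
	    ssh -i "$KEY" docker@192.168.39.86 \
	      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo crictl images'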
	I0520 10:22:07.214678   12352 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:22:07.255971   12352 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:22:07.256000   12352 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:22:07.256010   12352 kubeadm.go:928] updating node { 192.168.39.86 8443 v1.30.1 crio true true} ...
	I0520 10:22:07.256141   12352 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-807933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:22:07.256217   12352 ssh_runner.go:195] Run: crio config
	I0520 10:22:07.305430   12352 cni.go:84] Creating CNI manager for ""
	I0520 10:22:07.305458   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:22:07.305480   12352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:22:07.305505   12352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.86 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-807933 NodeName:addons-807933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:22:07.305665   12352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-807933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 10:22:07.305731   12352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:22:07.315662   12352 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:22:07.315740   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 10:22:07.325323   12352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 10:22:07.341600   12352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:22:07.357804   12352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
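	The 2154-byte kubeadm config just written can be sanity-checked before kubeadm init consumes it; a sketch using the bundled binaries (the `kubeadm config validate` subcommand exists in v1.30, though this run does not invoke it):

	    # Validate the rendered config against the kubeadm API types before init runs
	    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new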
	I0520 10:22:07.374208   12352 ssh_runner.go:195] Run: grep 192.168.39.86	control-plane.minikube.internal$ /etc/hosts
	I0520 10:22:07.377818   12352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:22:07.389721   12352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:22:07.506821   12352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:22:07.524640   12352 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933 for IP: 192.168.39.86
	I0520 10:22:07.524670   12352 certs.go:194] generating shared ca certs ...
	I0520 10:22:07.524690   12352 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:07.524860   12352 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:22:07.859161   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt ...
	I0520 10:22:07.859192   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt: {Name:mkeca0cd0f9f687c315d5a3735bd94b1a83d9372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:07.859393   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key ...
	I0520 10:22:07.859408   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key: {Name:mk1adfa07dfdb8a07e4270fc6a839960f920bb18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:07.859505   12352 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:22:08.002033   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt ...
	I0520 10:22:08.002064   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt: {Name:mk10798fc9395015430d2b33bb5f4e7d2bba6c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.002249   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key ...
	I0520 10:22:08.002266   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key: {Name:mk762fd9601e1dca9b2da8662366674c5d5778e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.002362   12352 certs.go:256] generating profile certs ...
	I0520 10:22:08.002430   12352 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.key
	I0520 10:22:08.002445   12352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt with IP's: []
	I0520 10:22:08.302306   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt ...
	I0520 10:22:08.302337   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: {Name:mk62198dfa51dceca363447b830b722c8dfe43cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.302533   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.key ...
	I0520 10:22:08.302553   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.key: {Name:mk54e198ed76401d154d64eca9152db2e37699d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.302653   12352 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124
	I0520 10:22:08.302679   12352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.86]
	I0520 10:22:08.364166   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124 ...
	I0520 10:22:08.364193   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124: {Name:mk645a54b93c57c3c359c77aba86b289ab5b5bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.364370   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124 ...
	I0520 10:22:08.364387   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124: {Name:mk69018507f95882ab2d08281937ca12be65320f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.364478   12352 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt.bb918124 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt
	I0520 10:22:08.364571   12352 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key.bb918124 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key
	I0520 10:22:08.364635   12352 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key
	I0520 10:22:08.364659   12352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt with IP's: []
	I0520 10:22:08.433556   12352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt ...
	I0520 10:22:08.433585   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt: {Name:mk83e9f6d29aa6cbc445f9297c8ad2ac57f2cafa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.433758   12352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key ...
	I0520 10:22:08.433774   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key: {Name:mk28d74a5926637e890a27b48152bcf69ab76f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:08.433987   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:22:08.434025   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:22:08.434061   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:22:08.434095   12352 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:22:08.434632   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:22:08.467414   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:22:08.493807   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:22:08.521816   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:22:08.546906   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 10:22:08.571086   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 10:22:08.594923   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:22:08.619672   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:22:08.643539   12352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:22:08.667547   12352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:22:08.685139   12352 ssh_runner.go:195] Run: openssl version
	I0520 10:22:08.691090   12352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:22:08.701658   12352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:22:08.706358   12352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:22:08.706412   12352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:22:08.712274   12352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
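	The b5213941.0 name in the command above is not arbitrary: it is the certificate's OpenSSL subject hash (the `openssl x509 -hash` call just above) plus a ".0" suffix, which is how the system certificate directory is indexed. The same link could be created explicitly:

	    # Derive the subject hash and create the lookup symlink OpenSSL expects in /etc/ssl/certs
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"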
	I0520 10:22:08.723289   12352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:22:08.727589   12352 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:22:08.727639   12352 kubeadm.go:391] StartCluster: {Name:addons-807933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-807933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:22:08.727720   12352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:22:08.727760   12352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:22:08.763557   12352 cri.go:89] found id: ""
	I0520 10:22:08.763622   12352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:22:08.773781   12352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:22:08.783203   12352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:22:08.792601   12352 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:22:08.792623   12352 kubeadm.go:156] found existing configuration files:
	
	I0520 10:22:08.792669   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:22:08.802078   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:22:08.802192   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:22:08.811829   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:22:08.820648   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:22:08.820704   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:22:08.829829   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:22:08.838729   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:22:08.838790   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:22:08.848329   12352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:22:08.857258   12352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:22:08.857311   12352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 10:22:08.866624   12352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 10:22:08.925514   12352 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:22:08.925655   12352 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:22:09.069923   12352 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:22:09.070048   12352 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:22:09.070184   12352 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 10:22:09.286467   12352 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:22:09.491823   12352 out.go:204]   - Generating certificates and keys ...
	I0520 10:22:09.491959   12352 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:22:09.492029   12352 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:22:09.500626   12352 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:22:09.911390   12352 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:22:10.318819   12352 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:22:10.463944   12352 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:22:10.823134   12352 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:22:10.823372   12352 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-807933 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I0520 10:22:10.943427   12352 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:22:10.943616   12352 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-807933 localhost] and IPs [192.168.39.86 127.0.0.1 ::1]
	I0520 10:22:11.044533   12352 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:22:11.143164   12352 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:22:11.393888   12352 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:22:11.394065   12352 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:22:11.508659   12352 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:22:11.625588   12352 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:22:11.868433   12352 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:22:12.055937   12352 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:22:12.265190   12352 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:22:12.265898   12352 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:22:12.270025   12352 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:22:12.271917   12352 out.go:204]   - Booting up control plane ...
	I0520 10:22:12.272014   12352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:22:12.272107   12352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:22:12.272163   12352 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:22:12.288002   12352 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:22:12.288136   12352 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:22:12.288184   12352 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:22:12.406687   12352 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:22:12.406785   12352 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:22:13.408710   12352 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003056036s
	I0520 10:22:13.408826   12352 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:22:17.908640   12352 kubeadm.go:309] [api-check] The API server is healthy after 4.502208626s
	I0520 10:22:17.922452   12352 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:22:17.947167   12352 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:22:17.975764   12352 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:22:17.975982   12352 kubeadm.go:309] [mark-control-plane] Marking the node addons-807933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:22:17.988329   12352 kubeadm.go:309] [bootstrap-token] Using token: g6iqut.p8aqjgcw9foiwg7c
	I0520 10:22:17.989761   12352 out.go:204]   - Configuring RBAC rules ...
	I0520 10:22:17.989911   12352 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:22:17.995026   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:22:18.005577   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:22:18.013485   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:22:18.017946   12352 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:22:18.021627   12352 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:22:18.317607   12352 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:22:18.766458   12352 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:22:19.315532   12352 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:22:19.315554   12352 kubeadm.go:309] 
	I0520 10:22:19.315620   12352 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:22:19.315629   12352 kubeadm.go:309] 
	I0520 10:22:19.315764   12352 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:22:19.315792   12352 kubeadm.go:309] 
	I0520 10:22:19.315837   12352 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:22:19.315917   12352 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:22:19.315988   12352 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:22:19.315998   12352 kubeadm.go:309] 
	I0520 10:22:19.316111   12352 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:22:19.316130   12352 kubeadm.go:309] 
	I0520 10:22:19.316197   12352 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:22:19.316209   12352 kubeadm.go:309] 
	I0520 10:22:19.316289   12352 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:22:19.316389   12352 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:22:19.316491   12352 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:22:19.316501   12352 kubeadm.go:309] 
	I0520 10:22:19.316624   12352 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:22:19.316764   12352 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:22:19.316775   12352 kubeadm.go:309] 
	I0520 10:22:19.316889   12352 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token g6iqut.p8aqjgcw9foiwg7c \
	I0520 10:22:19.317066   12352 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 10:22:19.317103   12352 kubeadm.go:309] 	--control-plane 
	I0520 10:22:19.317114   12352 kubeadm.go:309] 
	I0520 10:22:19.317234   12352 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:22:19.317246   12352 kubeadm.go:309] 
	I0520 10:22:19.317357   12352 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token g6iqut.p8aqjgcw9foiwg7c \
	I0520 10:22:19.317497   12352 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 10:22:19.317699   12352 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
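	The --discovery-token-ca-cert-hash value printed by kubeadm above can be recomputed from the cluster CA, which in this layout lives under /var/lib/minikube/certs (the standard kubeadm recipe, shown here as a sketch):

	    # SHA-256 over the DER-encoded public key of the cluster CA; should match the sha256:... above
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl pkey -pubin -outform der \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'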
	I0520 10:22:19.317792   12352 cni.go:84] Creating CNI manager for ""
	I0520 10:22:19.317810   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:22:19.319550   12352 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 10:22:19.320758   12352 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 10:22:19.331716   12352 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 10:22:19.354330   12352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:22:19.354424   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:19.354469   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-807933 minikube.k8s.io/updated_at=2024_05_20T10_22_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=addons-807933 minikube.k8s.io/primary=true
	I0520 10:22:19.387027   12352 ops.go:34] apiserver oom_adj: -16
	I0520 10:22:19.500165   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:20.000216   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:20.500243   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:21.000961   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:21.501052   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:22.000376   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:22.500339   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:23.000816   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:23.500724   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:24.001220   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:24.501184   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:25.000238   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:25.500972   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:26.000661   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:26.500849   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:27.000235   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:27.500859   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:28.000338   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:28.500484   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:29.001352   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:29.500913   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:30.001189   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:30.500882   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:31.000826   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:31.500267   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:32.000991   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:32.501076   12352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:22:32.584586   12352 kubeadm.go:1107] duration metric: took 13.230228729s to wait for elevateKubeSystemPrivileges
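	The run of `kubectl get sa default` calls above is a readiness poll for the default ServiceAccount (part of the elevateKubeSystemPrivileges step summarized here). Condensed, the same wait looks roughly like:

	    # Poll until the "default" ServiceAccount has been created by the controller manager
	    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done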
	W0520 10:22:32.584624   12352 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:22:32.584635   12352 kubeadm.go:393] duration metric: took 23.856999493s to StartCluster
	I0520 10:22:32.584657   12352 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:32.584801   12352 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:22:32.585261   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:22:32.585460   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:22:32.585493   12352 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:22:32.587599   12352 out.go:177] * Verifying Kubernetes components...
	I0520 10:22:32.585548   12352 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 10:22:32.585664   12352 config.go:182] Loaded profile config "addons-807933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:22:32.588989   12352 addons.go:69] Setting cloud-spanner=true in profile "addons-807933"
	I0520 10:22:32.588999   12352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:22:32.589010   12352 addons.go:69] Setting inspektor-gadget=true in profile "addons-807933"
	I0520 10:22:32.589026   12352 addons.go:69] Setting volumesnapshots=true in profile "addons-807933"
	I0520 10:22:32.589047   12352 addons.go:234] Setting addon cloud-spanner=true in "addons-807933"
	I0520 10:22:32.589058   12352 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-807933"
	I0520 10:22:32.589006   12352 addons.go:69] Setting metrics-server=true in profile "addons-807933"
	I0520 10:22:32.589079   12352 addons.go:69] Setting ingress=true in profile "addons-807933"
	I0520 10:22:32.589090   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589097   12352 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-807933"
	I0520 10:22:32.589089   12352 addons.go:69] Setting gcp-auth=true in profile "addons-807933"
	I0520 10:22:32.589107   12352 addons.go:234] Setting addon metrics-server=true in "addons-807933"
	I0520 10:22:32.589093   12352 addons.go:69] Setting storage-provisioner=true in profile "addons-807933"
	I0520 10:22:32.589118   12352 addons.go:234] Setting addon ingress=true in "addons-807933"
	I0520 10:22:32.589126   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589133   12352 mustload.go:65] Loading cluster: addons-807933
	I0520 10:22:32.589117   12352 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-807933"
	I0520 10:22:32.589118   12352 addons.go:69] Setting ingress-dns=true in profile "addons-807933"
	I0520 10:22:32.589149   12352 addons.go:234] Setting addon storage-provisioner=true in "addons-807933"
	I0520 10:22:32.589153   12352 addons.go:69] Setting registry=true in profile "addons-807933"
	I0520 10:22:32.589417   12352 addons.go:234] Setting addon registry=true in "addons-807933"
	I0520 10:22:32.589491   12352 addons.go:69] Setting default-storageclass=true in profile "addons-807933"
	I0520 10:22:32.589527   12352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-807933"
	I0520 10:22:32.589585   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589016   12352 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-807933"
	I0520 10:22:32.589050   12352 addons.go:234] Setting addon volumesnapshots=true in "addons-807933"
	I0520 10:22:32.589692   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589685   12352 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-807933"
	I0520 10:22:32.589430   12352 addons.go:234] Setting addon ingress-dns=true in "addons-807933"
	I0520 10:22:32.589760   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.589496   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.590010   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590051   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.590061   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590089   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589058   12352 addons.go:234] Setting addon inspektor-gadget=true in "addons-807933"
	I0520 10:22:32.590132   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.589143   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.590151   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589154   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.590260   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590302   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.590370   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590397   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589377   12352 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-807933"
	I0520 10:22:32.590523   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590531   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.590548   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.590563   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.589068   12352 addons.go:69] Setting helm-tiller=true in profile "addons-807933"
	I0520 10:22:32.588996   12352 addons.go:69] Setting yakd=true in profile "addons-807933"
	I0520 10:22:32.590614   12352 addons.go:234] Setting addon helm-tiller=true in "addons-807933"
	I0520 10:22:32.589645   12352 config.go:182] Loaded profile config "addons-807933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:22:32.590636   12352 addons.go:234] Setting addon yakd=true in "addons-807933"
	I0520 10:22:32.591094   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.591138   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.591183   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.591215   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.591436   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.591215   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.591678   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.591713   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.592078   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.592270   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.592324   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.592340   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.592960   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.593029   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.610909   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0520 10:22:32.612339   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I0520 10:22:32.613293   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0520 10:22:32.613455   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0520 10:22:32.613829   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.613891   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.613911   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.613843   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I0520 10:22:32.613852   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0520 10:22:32.613869   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.614575   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.614601   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.614647   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.614672   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.614747   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.614835   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.614850   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.614943   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.615009   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.615044   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.615280   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.615302   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.615467   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.615495   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.615586   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.615813   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.615854   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.615926   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.615936   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.615949   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.615971   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.616025   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.616103   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.616149   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.616235   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.617676   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.617718   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.627681   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.627736   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.627787   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.627824   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.628140   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.628540   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.628571   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.628756   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.628781   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.628838   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.634540   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.634583   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.638525   12352 addons.go:234] Setting addon default-storageclass=true in "addons-807933"
	I0520 10:22:32.638601   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.639011   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.639194   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.647306   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0520 10:22:32.647751   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.648276   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.648295   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.648702   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.649353   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.649382   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.650768   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0520 10:22:32.651120   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.651593   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.651614   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.651922   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.652085   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.653923   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0520 10:22:32.654108   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.656217   12352 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 10:22:32.654507   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.657511   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0520 10:22:32.657968   12352 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 10:22:32.657981   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 10:22:32.657998   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.659129   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.659148   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.659493   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.659687   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.660222   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.660259   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.660456   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0520 10:22:32.660612   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0520 10:22:32.661251   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.661341   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.661777   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.661800   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.661859   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.661876   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.661989   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.662151   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.662298   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.662415   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.662687   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.663146   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.663164   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.663437   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.663452   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.663848   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.664169   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.664278   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.664449   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.664616   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.664673   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.665116   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.666568   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.668481   12352 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 10:22:32.667798   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I0520 10:22:32.668060   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.669936   12352 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 10:22:32.671487   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 10:22:32.670381   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.671435   12352 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 10:22:32.672615   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 10:22:32.672633   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.674187   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 10:22:32.673194   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.675447   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.676789   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 10:22:32.675963   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.676955   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.677400   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0520 10:22:32.677683   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.678139   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 10:22:32.678277   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.678447   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.678863   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.679713   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.679890   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.679901   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 10:22:32.679229   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0520 10:22:32.680115   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.681131   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 10:22:32.682271   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 10:22:32.681332   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.681558   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.681906   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.684505   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 10:22:32.685651   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 10:22:32.685667   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 10:22:32.685684   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.684577   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.685740   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.684398   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.685779   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.685934   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0520 10:22:32.686121   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.686302   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.687471   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.687511   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.688009   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.688033   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.688251   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.688287   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.688370   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.688546   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.689237   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.691012   12352 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 10:22:32.692806   12352 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:22:32.692831   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 10:22:32.692850   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.690625   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.692944   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.692974   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.690721   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.691185   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.691777   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0520 10:22:32.693259   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.693599   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.693795   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.694079   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.694178   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33825
	I0520 10:22:32.695630   12352 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 10:22:32.694643   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.694677   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.696650   12352 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 10:22:32.696672   12352 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 10:22:32.696701   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.696754   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.696895   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38499
	I0520 10:22:32.697197   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.697219   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.697288   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.697630   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.697762   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.697778   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.697826   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.697833   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.698381   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.698421   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.698651   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.698658   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.698671   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.698697   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.698800   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.699042   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.699069   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.699107   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.699139   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.699160   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.699170   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.700145   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.700507   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.700531   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.700650   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.700795   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.700895   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.701003   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.705864   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0520 10:22:32.706290   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.707168   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.707183   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.707519   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.707698   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.709871   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.712181   12352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:22:32.711595   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I0520 10:22:32.711768   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38161
	I0520 10:22:32.713605   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0520 10:22:32.714528   12352 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:22:32.714811   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:22:32.714830   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.715624   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.715939   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.716046   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.716535   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.716551   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.716688   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.716699   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.716808   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.716818   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.717195   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.717250   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0520 10:22:32.717345   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.717592   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.717653   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.717716   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.717801   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0520 10:22:32.717909   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.718508   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.718516   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.718654   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.718670   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.719067   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.719085   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.719153   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34569
	I0520 10:22:32.719595   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.719783   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.719878   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.720278   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.720837   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.720893   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.720905   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.721127   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.722651   12352 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 10:22:32.723803   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 10:22:32.723816   12352 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 10:22:32.723829   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.723058   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.721833   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.721723   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.723090   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.723938   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.723964   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.723306   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.723506   12352 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-807933"
	I0520 10:22:32.724038   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:32.725967   12352 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 10:22:32.724538   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.724671   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.725433   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.726833   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0520 10:22:32.727142   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 10:22:32.727171   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.727282   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.727816   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.728296   12352 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 10:22:32.729522   12352 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:22:32.728340   12352 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 10:22:32.728408   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.728413   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.728561   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.728648   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.728763   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.729623   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.729665   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 10:22:32.729682   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.729720   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.730112   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.730126   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.730182   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.730218   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.730326   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.730760   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.731115   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.733506   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.734269   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.734575   12352 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:22:32.734587   12352 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:22:32.734604   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.734810   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.735553   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0520 10:22:32.735566   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.735602   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.735619   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.735620   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0520 10:22:32.735730   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.735748   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.735752   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.735878   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.735934   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.736142   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.736228   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.736280   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.736341   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.736677   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.736694   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.736910   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.737076   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.737321   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.737506   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.737556   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.737758   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.737780   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.737830   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.737857   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.738060   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.738074   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.738208   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.738228   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.738343   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.738545   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.739487   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.741898   12352 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0520 10:22:32.741234   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.743116   12352 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0520 10:22:32.743126   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0520 10:22:32.743140   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.744557   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:22:32.746224   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 10:22:32.745851   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.746414   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.748695   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:22:32.748750   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.750013   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.748881   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.750189   12352 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:22:32.750202   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 10:22:32.750218   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.750289   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.750420   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.751487   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35445
	I0520 10:22:32.751865   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.752595   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.752667   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.753502   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.753561   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.754082   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:32.754113   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:32.754331   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39649
	I0520 10:22:32.754348   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.754390   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.754410   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.754522   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.754659   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.754773   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.769104   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I0520 10:22:32.769412   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.770308   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.770336   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.770601   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.770778   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.772196   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.773981   12352 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 10:22:32.775171   12352 out.go:177]   - Using image docker.io/busybox:stable
	I0520 10:22:32.776400   12352 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:22:32.776415   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 10:22:32.776434   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.777420   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:32.777942   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:32.777963   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:32.778336   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:32.778567   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:32.779554   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.779935   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.779967   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.779978   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:32.780117   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.781457   12352 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 10:22:32.780293   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.782618   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 10:22:32.782631   12352 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 10:22:32.782645   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:32.782779   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.783308   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:32.785565   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.785985   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:32.786044   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:32.786154   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:32.786305   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:32.786434   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:32.786571   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	W0520 10:22:32.788414   12352 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49554->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:32.788442   12352 retry.go:31] will retry after 133.152417ms: ssh: handshake failed: read tcp 192.168.39.1:49554->192.168.39.86:22: read: connection reset by peer
	W0520 10:22:32.788501   12352 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49558->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:32.788511   12352 retry.go:31] will retry after 160.532242ms: ssh: handshake failed: read tcp 192.168.39.1:49558->192.168.39.86:22: read: connection reset by peer
	W0520 10:22:32.922369   12352 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49570->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:32.922403   12352 retry.go:31] will retry after 206.830679ms: ssh: handshake failed: read tcp 192.168.39.1:49570->192.168.39.86:22: read: connection reset by peer
	I0520 10:22:33.117706   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 10:22:33.120159   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:22:33.135394   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:22:33.158944   12352 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 10:22:33.158969   12352 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 10:22:33.190375   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:22:33.210227   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 10:22:33.210254   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 10:22:33.233955   12352 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 10:22:33.233988   12352 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 10:22:33.264702   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 10:22:33.264727   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 10:22:33.329426   12352 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 10:22:33.329451   12352 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 10:22:33.339396   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:22:33.345731   12352 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0520 10:22:33.345753   12352 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0520 10:22:33.345964   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:22:33.387851   12352 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:22:33.387868   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 10:22:33.393025   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 10:22:33.393044   12352 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 10:22:33.394478   12352 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 10:22:33.394491   12352 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 10:22:33.408530   12352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:22:33.408591   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 10:22:33.516030   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 10:22:33.516050   12352 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 10:22:33.527100   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 10:22:33.527118   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 10:22:33.549292   12352 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 10:22:33.549315   12352 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0520 10:22:33.565829   12352 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 10:22:33.565856   12352 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 10:22:33.604854   12352 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 10:22:33.604880   12352 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 10:22:33.611697   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:22:33.629008   12352 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:22:33.629025   12352 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 10:22:33.675724   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 10:22:33.675741   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 10:22:33.721095   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 10:22:33.721120   12352 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 10:22:33.739098   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:22:33.769929   12352 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 10:22:33.769960   12352 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 10:22:33.780537   12352 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 10:22:33.780555   12352 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 10:22:33.803665   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0520 10:22:33.812642   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:22:33.913251   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 10:22:33.913275   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 10:22:33.936530   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 10:22:33.936561   12352 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 10:22:33.975754   12352 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 10:22:33.975795   12352 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 10:22:34.022207   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 10:22:34.022236   12352 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 10:22:34.090872   12352 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 10:22:34.090894   12352 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 10:22:34.110464   12352 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:22:34.110483   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 10:22:34.252052   12352 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 10:22:34.252079   12352 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 10:22:34.271402   12352 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:22:34.271422   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 10:22:34.314839   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 10:22:34.314868   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 10:22:34.414343   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:22:34.445921   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:22:34.477791   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 10:22:34.477829   12352 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 10:22:34.512361   12352 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:22:34.512381   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 10:22:34.698878   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 10:22:34.698902   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 10:22:34.944166   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:22:35.179122   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 10:22:35.179144   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 10:22:35.578482   12352 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:22:35.578514   12352 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 10:22:35.769379   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.651632541s)
	I0520 10:22:35.769420   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769417   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.649232846s)
	I0520 10:22:35.769434   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.769457   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769474   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.769740   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:35.769767   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.769775   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.769780   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:35.769790   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769799   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.769864   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.769875   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.769883   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:35.769913   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:35.770010   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:35.770052   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.770065   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.770228   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:35.770243   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:35.909675   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:22:38.379175   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.24374315s)
	I0520 10:22:38.379230   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:38.379243   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:38.379532   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:38.379557   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:38.379567   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:38.379575   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:38.379615   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:38.379863   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:38.379886   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:39.758535   12352 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 10:22:39.758569   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:39.762466   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:39.762942   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:39.762983   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:39.763166   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:39.763376   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:39.763549   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:39.763697   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:40.289670   12352 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 10:22:40.538788   12352 addons.go:234] Setting addon gcp-auth=true in "addons-807933"
	I0520 10:22:40.538855   12352 host.go:66] Checking if "addons-807933" exists ...
	I0520 10:22:40.539291   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:40.539339   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:40.554395   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0520 10:22:40.554820   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:40.555399   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:40.555424   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:40.555780   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:40.556230   12352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:22:40.556254   12352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:22:40.571870   12352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0520 10:22:40.572326   12352 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:22:40.572804   12352 main.go:141] libmachine: Using API Version  1
	I0520 10:22:40.572828   12352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:22:40.573129   12352 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:22:40.573307   12352 main.go:141] libmachine: (addons-807933) Calling .GetState
	I0520 10:22:40.575180   12352 main.go:141] libmachine: (addons-807933) Calling .DriverName
	I0520 10:22:40.575403   12352 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 10:22:40.575431   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHHostname
	I0520 10:22:40.578239   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:40.578650   12352 main.go:141] libmachine: (addons-807933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:70:6f", ip: ""} in network mk-addons-807933: {Iface:virbr1 ExpiryTime:2024-05-20 11:21:51 +0000 UTC Type:0 Mac:52:54:00:ab:70:6f Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:addons-807933 Clientid:01:52:54:00:ab:70:6f}
	I0520 10:22:40.578678   12352 main.go:141] libmachine: (addons-807933) DBG | domain addons-807933 has defined IP address 192.168.39.86 and MAC address 52:54:00:ab:70:6f in network mk-addons-807933
	I0520 10:22:40.578854   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHPort
	I0520 10:22:40.579006   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHKeyPath
	I0520 10:22:40.579133   12352 main.go:141] libmachine: (addons-807933) Calling .GetSSHUsername
	I0520 10:22:40.579239   12352 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/addons-807933/id_rsa Username:docker}
	I0520 10:22:41.465716   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.275298427s)
	I0520 10:22:41.465753   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.126328183s)
	I0520 10:22:41.465769   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465783   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.465796   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465805   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.465802   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.119810706s)
	I0520 10:22:41.465830   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465839   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.465861   12352 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.05725306s)
	I0520 10:22:41.465882   12352 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.057325987s)
	I0520 10:22:41.465917   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.854195453s)
	I0520 10:22:41.465938   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.465950   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466041   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.726923341s)
	I0520 10:22:41.466065   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466074   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466141   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.662453666s)
	I0520 10:22:41.466158   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466167   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466213   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466247   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466255   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466255   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.653590256s)
	I0520 10:22:41.466263   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466273   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466282   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466283   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466290   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466319   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466343   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466353   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466362   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466369   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466442   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.052066798s)
	W0520 10:22:41.466469   12352 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 10:22:41.466500   12352 retry.go:31] will retry after 219.080102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
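
The failure above is an ordering problem, not a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, and the API server has not registered the new kind yet ("ensure CRDs are installed first"). minikube simply retries after 219ms; the re-run with --force at 10:22:41.686 completes at 10:22:43.994 once the CRDs are established. Below is a minimal sketch of that retry-until-the-kind-resolves pattern, assuming a hypothetical applyWithRetry helper that shells out to kubectl; it is illustrative, not minikube's addons code.

	// Hypothetical sketch: re-run `kubectl apply -f` while the error says a
	// CRD-backed kind is not yet known to the API server.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func applyWithRetry(files []string, attempts int, delay time.Duration) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("%v: %s", err, out)
			// "no matches for kind" means the CRD has not been registered yet;
			// any other error is not going to heal by waiting.
			if !strings.Contains(string(out), "no matches for kind") {
				return lastErr
			}
			time.Sleep(delay)
			delay *= 2 // simple exponential backoff
		}
		return lastErr
	}

	func main() {
		files := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		}
		if err := applyWithRetry(files, 5, 200*time.Millisecond); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
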
	I0520 10:22:41.466552   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466560   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.020608424s)
	I0520 10:22:41.466565   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466577   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466588   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466607   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466614   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466699   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.522504263s)
	I0520 10:22:41.466717   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466726   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466744   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466809   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466834   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466841   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466850   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466857   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466872   12352 node_ready.go:35] waiting up to 6m0s for node "addons-807933" to be "Ready" ...
	I0520 10:22:41.466895   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466913   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466920   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466927   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.466932   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.466954   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.466973   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.466987   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.466996   12352 addons.go:470] Verifying addon registry=true in "addons-807933"
	I0520 10:22:41.465879   12352 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 10:22:41.469564   12352 out.go:177] * Verifying registry addon...
	I0520 10:22:41.466933   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.467307   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.467346   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.467359   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.467374   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.467993   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.468010   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.468047   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.468071   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.468968   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.468983   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.469730   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.469746   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.469981   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.470010   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.471422   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471438   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471480   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471508   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471524   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.471535   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.472913   12352 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-807933 service yakd-dashboard -n yakd-dashboard
	
	I0520 10:22:41.471590   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471609   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471501   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.471836   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.471861   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.471881   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.471909   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.472598   12352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 10:22:41.474179   12352 addons.go:470] Verifying addon ingress=true in "addons-807933"
	I0520 10:22:41.475608   12352 out.go:177] * Verifying ingress addon...
	I0520 10:22:41.474240   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.474253   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.475688   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.474256   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.474265   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.475787   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.475764   12352 addons.go:470] Verifying addon metrics-server=true in "addons-807933"
	I0520 10:22:41.475927   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.477197   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.475956   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.476087   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.477355   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.476120   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.477805   12352 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 10:22:41.485843   12352 node_ready.go:49] node "addons-807933" has status "Ready":"True"
	I0520 10:22:41.485858   12352 node_ready.go:38] duration metric: took 18.970366ms for node "addons-807933" to be "Ready" ...
	I0520 10:22:41.485867   12352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:22:41.508747   12352 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:22:41.508771   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:41.509070   12352 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 10:22:41.509094   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
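
The kapi.go lines above (and the repeated "waiting for pod ... Pending" lines that follow) poll the registry and ingress-nginx pods by label selector until every matched pod reports Ready. A minimal client-go sketch of that polling loop follows; the selectors, namespaces and kubeconfig path come from the log, while the timeout and the waitForPodsReady helper are illustrative, not kapi.go's implementation.

	// Hypothetical sketch: poll pods matching a label selector until all are Ready.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func waitForPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing scheduled yet, keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning || !podReady(&p) {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodsReady(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			fmt.Println("registry pods not ready:", err)
		}
	}
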
	I0520 10:22:41.534458   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.534477   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.534731   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.534745   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	W0520 10:22:41.534863   12352 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
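
The warning above is the standard Kubernetes optimistic-concurrency conflict: between reading the local-path StorageClass and writing back the default-class annotation, something else updated the object, so the stale resourceVersion is rejected ("the object has been modified; please apply your changes to the latest version and try again"). The usual remedy is to re-read and reapply the change on conflict. A minimal client-go sketch using retry.RetryOnConflict follows, assuming a reachable kubeconfig at the path shown in the log; it is illustrative, not the storage-provisioner-rancher addon's code.

	// Hypothetical sketch: mark a StorageClass as default, retrying on update conflicts.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// RetryOnConflict re-fetches the object and reapplies the change whenever
		// the API server rejects the update because the resourceVersion is stale.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatal(err)
		}
	}
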
	I0520 10:22:41.544283   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:41.544301   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:41.544623   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:41.544582   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:41.544692   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:41.559560   12352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b4ffw" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.601936   12352 pod_ready.go:92] pod "coredns-7db6d8ff4d-b4ffw" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.601959   12352 pod_ready.go:81] duration metric: took 42.368414ms for pod "coredns-7db6d8ff4d-b4ffw" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.601969   12352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mq6st" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.625060   12352 pod_ready.go:92] pod "coredns-7db6d8ff4d-mq6st" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.625080   12352 pod_ready.go:81] duration metric: took 23.104459ms for pod "coredns-7db6d8ff4d-mq6st" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.625089   12352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.641186   12352 pod_ready.go:92] pod "etcd-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.641210   12352 pod_ready.go:81] duration metric: took 16.112316ms for pod "etcd-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.641222   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.686196   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:22:41.724525   12352 pod_ready.go:92] pod "kube-apiserver-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.724551   12352 pod_ready.go:81] duration metric: took 83.321214ms for pod "kube-apiserver-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.724564   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.870202   12352 pod_ready.go:92] pod "kube-controller-manager-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:41.870234   12352 pod_ready.go:81] duration metric: took 145.662391ms for pod "kube-controller-manager-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.870249   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-twglf" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:41.970135   12352 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-807933" context rescaled to 1 replicas
	I0520 10:22:41.979239   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:41.981548   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:42.271296   12352 pod_ready.go:92] pod "kube-proxy-twglf" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:42.271326   12352 pod_ready.go:81] duration metric: took 401.06798ms for pod "kube-proxy-twglf" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.271338   12352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.484580   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:42.500540   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:42.695564   12352 pod_ready.go:92] pod "kube-scheduler-addons-807933" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:42.695584   12352 pod_ready.go:81] duration metric: took 424.238743ms for pod "kube-scheduler-addons-807933" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.695593   12352 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:42.724721   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.814984356s)
	I0520 10:22:42.724756   12352 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.149325927s)
	I0520 10:22:42.724772   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:42.724787   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:42.726163   12352 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:22:42.725037   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:42.725067   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:42.727505   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:42.727520   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:42.727530   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:42.728730   12352 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 10:22:42.729870   12352 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 10:22:42.729883   12352 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 10:22:42.727856   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:42.729964   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:42.729988   12352 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-807933"
	I0520 10:22:42.727886   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:42.731320   12352 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 10:22:42.733348   12352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 10:22:42.773943   12352 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:22:42.773986   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:42.879719   12352 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 10:22:42.879745   12352 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 10:22:42.975288   12352 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:22:42.975315   12352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 10:22:42.981132   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:42.984485   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.040168   12352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:22:43.251940   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:43.484740   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:43.489828   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.738938   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:43.982229   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.983304   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:43.994771   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.308534911s)
	I0520 10:22:43.994819   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:43.994834   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:43.995120   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:43.995169   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:43.995185   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:43.995194   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:43.995203   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:43.995494   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:43.995535   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:44.239430   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:44.529659   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:44.530277   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:44.657226   12352 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.617014007s)
	I0520 10:22:44.657287   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:44.657305   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:44.657609   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:44.657675   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:44.657704   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:44.657717   12352 main.go:141] libmachine: Making call to close driver server
	I0520 10:22:44.657725   12352 main.go:141] libmachine: (addons-807933) Calling .Close
	I0520 10:22:44.657940   12352 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:22:44.658009   12352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:22:44.657967   12352 main.go:141] libmachine: (addons-807933) DBG | Closing plugin on server side
	I0520 10:22:44.659861   12352 addons.go:470] Verifying addon gcp-auth=true in "addons-807933"
	I0520 10:22:44.661258   12352 out.go:177] * Verifying gcp-auth addon...
	I0520 10:22:44.663199   12352 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 10:22:44.686171   12352 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 10:22:44.686191   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:44.725124   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:44.753198   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:44.979455   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:44.982723   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:45.166734   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:45.240027   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:45.480092   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:45.482343   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:45.666100   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:45.737944   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:45.979360   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:45.981956   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:46.166635   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:46.239342   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:46.479465   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:46.481618   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:46.666791   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:46.743167   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:47.110218   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:47.110228   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:47.167132   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:47.201134   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:47.239200   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:47.479034   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:47.482045   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:47.668111   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:47.739035   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:47.978894   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:47.981598   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:48.167381   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:48.239524   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:48.479144   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:48.482156   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:48.667295   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:48.739783   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:48.978761   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:48.981927   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:49.171980   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:49.201337   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:49.239619   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:49.480782   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:49.482043   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:49.668055   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:49.739043   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:49.980820   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:49.985454   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:50.167643   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:50.241918   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:50.478781   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:50.481600   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:50.667221   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:50.740384   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:50.979682   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:50.981826   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:51.166965   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:51.239887   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:51.478225   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:51.481897   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:51.666859   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:51.701823   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:51.737839   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:51.979123   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:51.982076   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:52.167154   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:52.238329   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:52.479494   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:52.481888   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:52.667102   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:52.738647   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:52.980053   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:52.982183   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:53.169738   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:53.239797   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:53.480493   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:53.489412   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:53.667382   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:53.703306   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:53.740013   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:53.981206   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:53.983346   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:54.167063   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:54.238482   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:54.480274   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:54.482461   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:54.667565   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:54.740043   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:54.980884   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:54.985512   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:55.166631   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:55.238273   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:55.481443   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:55.483677   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:55.667239   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:55.738477   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:55.979513   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:55.981372   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:56.167373   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:56.201876   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:56.239033   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:56.479602   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:56.482929   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:56.668118   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:56.739686   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:56.980412   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:56.982208   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:57.171607   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:57.238535   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:57.479594   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:57.483253   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:57.667704   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:57.749859   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:57.978925   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:57.981796   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:58.167356   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:58.238766   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:58.478897   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:58.481719   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:58.668671   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:58.705967   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:58.738468   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:58.979738   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:58.981844   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:59.166589   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:59.239316   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:59.480150   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:59.484349   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:59.667328   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:59.738320   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:59.979593   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:22:59.981674   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:00.166829   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:00.240092   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:00.479517   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:00.482639   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:00.667598   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:00.738827   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:00.979503   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:00.982103   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:01.168338   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:01.201955   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:01.241515   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:01.478758   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:01.481913   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:01.667276   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:01.739623   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:01.978485   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:01.981394   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:02.167305   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:02.240508   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:02.479430   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:02.482088   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:02.667830   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:02.738987   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:02.979859   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:02.982795   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:03.167471   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:03.203230   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:03.241622   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:03.479197   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:03.481651   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:03.668778   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:03.740761   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:03.978165   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:03.982020   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:04.167600   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:04.238311   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:04.480871   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:04.482340   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:04.667220   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:04.739742   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:04.997807   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:04.998243   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:05.167289   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:05.238263   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:05.759663   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:05.759742   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:05.761525   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:05.762276   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:05.764312   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:05.978841   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:05.982345   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:06.167826   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:06.238242   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:06.479485   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:06.482099   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:06.667431   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:06.738764   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:06.979060   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:06.983469   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:07.168989   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:07.238805   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:07.478534   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:07.481363   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:07.667770   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:07.739153   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:07.979291   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:07.981556   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:08.166497   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:08.201748   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:08.242437   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:08.483324   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:08.483343   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:08.668184   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:08.737906   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:08.978524   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:08.981225   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:09.167158   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:09.239871   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:09.478699   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:09.481135   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:09.666896   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:09.738985   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:09.979685   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:09.988277   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:10.167321   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:10.239769   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:10.479783   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:10.483417   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:10.668016   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:10.700714   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:10.738094   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:10.980718   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:10.985372   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:11.167271   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:11.242604   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:11.479334   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:11.482698   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:11.667553   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:11.742304   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:11.979371   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:11.982513   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:12.167418   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:12.244207   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:12.479818   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:12.481930   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:12.668556   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:12.703022   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:12.740499   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:12.978958   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:12.983157   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:13.167500   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:13.239263   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:13.481367   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:13.482424   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:13.667359   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:13.738678   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:13.980672   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:13.982751   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:14.167024   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:14.239293   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:14.479327   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:14.481951   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:14.666924   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:14.703506   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:14.743002   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:14.980119   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:14.983759   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:15.166620   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:15.239603   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:15.479562   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:15.484668   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:15.667419   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:15.738504   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:15.980115   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:15.983208   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:16.169092   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:16.239991   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:16.478822   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:16.481577   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:16.667816   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:16.704561   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:16.740846   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:16.978939   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:16.981459   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:17.166801   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:17.238861   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:17.480695   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:17.482746   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:17.666827   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:17.738990   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:17.979533   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:17.982829   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:18.168334   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:18.239033   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:18.752632   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:18.752776   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:18.753364   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:18.756514   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:18.757069   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:18.979483   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:18.982318   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:19.167327   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:19.239159   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:19.479577   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:19.482907   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:19.666308   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:19.753670   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:19.979262   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:19.982103   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:20.167121   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:20.239582   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:20.480080   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:20.483352   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:20.667290   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:20.742021   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:20.979061   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:20.981839   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:21.167082   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:21.209252   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:21.242578   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:21.479930   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:21.484868   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:21.667871   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:21.739264   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:21.979088   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:21.982174   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:22.169691   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:22.241802   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:22.478903   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:22.481974   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:22.667264   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:22.746759   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:22.979920   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:22.982330   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:23.167991   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:23.239453   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:23.480495   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:23.482927   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:23.667604   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:23.701908   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:23.739101   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:23.978936   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:23.982120   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:24.167077   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:24.239750   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:24.478397   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:24.480949   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:24.667020   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:24.742354   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:24.982937   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:24.988860   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:25.166976   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:25.238147   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:25.478929   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:25.481768   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:25.666302   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:25.739359   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:25.979326   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:25.981591   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:26.167827   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:26.200798   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:26.238645   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:26.479657   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:26.482419   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:26.668870   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:26.749954   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:26.980732   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:26.983253   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:27.168167   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:27.241227   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:27.480413   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:27.482382   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:27.667571   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:27.739108   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:27.978955   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:27.981278   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:28.167444   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:28.202188   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:28.241747   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:28.479522   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:28.482377   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:28.903376   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:28.904400   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:28.991109   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:28.993292   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:29.168820   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:29.238887   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:29.478478   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:29.481966   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:29.667390   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:29.738509   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:29.979209   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:29.982434   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:30.167256   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:30.238624   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:30.478901   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:30.481624   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:30.671313   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:30.706224   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:30.738854   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:30.979228   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:30.981336   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:31.167330   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:31.239129   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:31.478744   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:31.481141   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:31.667194   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:31.738462   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:31.980050   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:31.988691   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:32.168175   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:32.238533   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:32.479258   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:23:32.482271   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:32.667418   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:32.738697   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:32.979722   12352 kapi.go:107] duration metric: took 51.507119416s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 10:23:32.982214   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:33.167676   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:33.202214   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:33.238869   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:33.482341   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:33.667098   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:33.737876   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:33.982589   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:34.167558   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:34.238495   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:34.482253   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:34.667638   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:34.738266   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:34.982517   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:35.168169   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:35.238626   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:35.482190   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:35.669319   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:35.700802   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:35.737977   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:35.985599   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:36.167172   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:36.241755   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:36.483506   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:36.667177   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:36.739242   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:36.982955   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:37.166978   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:37.240754   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:37.482284   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:37.666991   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:37.715101   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:37.743173   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:37.982634   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:38.168065   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:38.237822   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:38.932611   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:38.932995   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:38.936577   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:38.984194   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:39.169994   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:39.239324   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:39.484512   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:39.668697   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:39.741355   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:39.982199   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:40.167788   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:40.202233   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:40.242781   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:40.481746   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:40.667657   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:40.738321   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:40.983030   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:41.166813   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:41.241471   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:41.482255   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:41.668516   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:41.739575   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:41.982689   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:42.167507   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:42.238728   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:42.483020   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:42.667672   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:42.704475   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:42.741067   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:42.982398   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:43.167168   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:43.239945   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:43.482597   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:43.667566   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:43.739008   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:43.981859   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:44.167023   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:44.241156   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:44.482977   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:44.666840   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:44.738131   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:44.985374   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:45.168537   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:45.202980   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:45.250978   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:45.490567   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:45.667489   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:45.738725   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:46.014220   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:46.172093   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:46.241057   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:46.482561   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:46.672646   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:46.741727   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:46.989517   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:47.168618   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:47.238989   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:47.482414   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:47.668761   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:47.702491   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:47.738333   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:47.982562   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:48.167441   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:48.238927   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:48.488203   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:48.667245   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:48.739314   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:48.983025   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:49.167603   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:49.239128   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:49.483262   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:49.673758   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:49.743401   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:49.982998   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:50.166746   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:50.202419   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:50.240310   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:50.483054   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:50.666964   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:50.739262   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:50.982170   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:51.167476   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:51.238946   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:51.483437   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:51.667279   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:51.739216   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:51.982683   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:52.167176   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:52.239263   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:52.482483   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:52.676679   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:52.704153   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:53.166493   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:53.174874   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:53.174900   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:53.241236   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:53.482232   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:53.666824   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:53.738907   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:53.984323   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:54.167083   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:54.243029   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:54.482469   12352 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:23:54.667002   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:54.739115   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:54.982590   12352 kapi.go:107] duration metric: took 1m13.504781188s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 10:23:55.166917   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:55.201059   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:55.241449   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:55.666758   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:55.746161   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:56.168208   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:56.239376   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:56.667304   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:56.738492   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:57.167365   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:57.203555   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:57.239673   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:57.667190   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:57.738137   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:58.167593   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:58.244116   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:58.675261   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:23:58.744743   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:59.168893   12352 kapi.go:107] duration metric: took 1m14.505690445s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 10:23:59.170866   12352 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-807933 cluster.
	I0520 10:23:59.172220   12352 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 10:23:59.173661   12352 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
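The three gcp-auth messages above describe the add-on's contract: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label, and existing pods only pick the secret up after a recreate or a re-enable with --refresh. A minimal sketch of opting one pod out, assuming the label key alone is what the webhook checks (pod name and image are placeholders, not from this run):

    $ kubectl --context addons-807933 run demo --image=registry.k8s.io/pause:3.9 \
        --labels=gcp-auth-skip-secret=true
    $ kubectl --context addons-807933 get pod demo -o jsonpath='{.spec.volumes[*].name}'
    # the GCP credentials volume should be absent from the list when the label is set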
	I0520 10:23:59.204413   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:23:59.244414   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:23:59.738872   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:00.238693   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:00.738149   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:01.332535   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:01.335781   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:01.740992   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:02.238063   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:02.741285   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:03.241486   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:03.705088   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:03.739004   12352 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:24:04.239766   12352 kapi.go:107] duration metric: took 1m21.506417837s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 10:24:04.241568   12352 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, yakd, helm-tiller, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0520 10:24:04.242887   12352 addons.go:505] duration metric: took 1m31.657343198s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns yakd helm-tiller metrics-server inspektor-gadget default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
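The same add-on set can be inspected or re-applied from the minikube CLI against this profile; illustrative commands (the profile name, addon names, and the --refresh flag are taken from the log above, the rest is standard minikube usage):

    $ minikube -p addons-807933 addons list
    $ minikube -p addons-807933 addons enable csi-hostpath-driver
    $ minikube -p addons-807933 addons enable gcp-auth --refresh   # re-mount credentials into existing pods, as the message above suggests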
	I0520 10:24:06.203711   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:08.703279   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:11.201791   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:13.202301   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:15.701153   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:17.701468   12352 pod_ready.go:102] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"False"
	I0520 10:24:18.702834   12352 pod_ready.go:92] pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace has status "Ready":"True"
	I0520 10:24:18.702862   12352 pod_ready.go:81] duration metric: took 1m36.007261506s for pod "metrics-server-c59844bb4-8j5ls" in "kube-system" namespace to be "Ready" ...
	I0520 10:24:18.702876   12352 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2bwlq" in "kube-system" namespace to be "Ready" ...
	I0520 10:24:18.716651   12352 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2bwlq" in "kube-system" namespace has status "Ready":"True"
	I0520 10:24:18.716674   12352 pod_ready.go:81] duration metric: took 13.790589ms for pod "nvidia-device-plugin-daemonset-2bwlq" in "kube-system" namespace to be "Ready" ...
	I0520 10:24:18.716699   12352 pod_ready.go:38] duration metric: took 1m37.230821368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:24:18.716721   12352 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:24:18.716756   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:24:18.716815   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:24:18.767995   12352 cri.go:89] found id: "f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:18.768022   12352 cri.go:89] found id: ""
	I0520 10:24:18.768039   12352 logs.go:276] 1 containers: [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75]
	I0520 10:24:18.768095   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.772571   12352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:24:18.772642   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:24:18.813105   12352 cri.go:89] found id: "0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:18.813128   12352 cri.go:89] found id: ""
	I0520 10:24:18.813135   12352 logs.go:276] 1 containers: [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27]
	I0520 10:24:18.813177   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.817579   12352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:24:18.817637   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:24:18.880226   12352 cri.go:89] found id: "0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:18.880253   12352 cri.go:89] found id: ""
	I0520 10:24:18.880262   12352 logs.go:276] 1 containers: [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2]
	I0520 10:24:18.880314   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.888993   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:24:18.889078   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:24:18.932538   12352 cri.go:89] found id: "78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:18.932557   12352 cri.go:89] found id: ""
	I0520 10:24:18.932564   12352 logs.go:276] 1 containers: [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e]
	I0520 10:24:18.932613   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.937063   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:24:18.937133   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:24:18.989617   12352 cri.go:89] found id: "dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:18.989643   12352 cri.go:89] found id: ""
	I0520 10:24:18.989652   12352 logs.go:276] 1 containers: [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad]
	I0520 10:24:18.989710   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:18.995025   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:24:18.995106   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:24:19.037763   12352 cri.go:89] found id: "01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:19.037784   12352 cri.go:89] found id: ""
	I0520 10:24:19.037793   12352 logs.go:276] 1 containers: [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136]
	I0520 10:24:19.037851   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:19.042269   12352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:24:19.042327   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:24:19.102165   12352 cri.go:89] found id: ""
	I0520 10:24:19.102188   12352 logs.go:276] 0 containers: []
	W0520 10:24:19.102197   12352 logs.go:278] No container was found matching "kindnet"
	I0520 10:24:19.102205   12352 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:24:19.102215   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:24:20.072292   12352 logs.go:123] Gathering logs for container status ...
	I0520 10:24:20.072340   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:24:20.125541   12352 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:24:20.125575   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:24:20.265478   12352 logs.go:123] Gathering logs for kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] ...
	I0520 10:24:20.265504   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:20.319540   12352 logs.go:123] Gathering logs for coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] ...
	I0520 10:24:20.319575   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:20.362397   12352 logs.go:123] Gathering logs for kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] ...
	I0520 10:24:20.362427   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:20.412157   12352 logs.go:123] Gathering logs for kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] ...
	I0520 10:24:20.412193   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:20.448477   12352 logs.go:123] Gathering logs for kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] ...
	I0520 10:24:20.448504   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:20.512304   12352 logs.go:123] Gathering logs for kubelet ...
	I0520 10:24:20.512343   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:24:20.575439   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:20.575595   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:20.595325   12352 logs.go:123] Gathering logs for dmesg ...
	I0520 10:24:20.595345   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:24:20.611572   12352 logs.go:123] Gathering logs for etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] ...
	I0520 10:24:20.611599   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
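Each "Gathering logs" step above pairs a container lookup (crictl ps -a --quiet --name=<component>) with a 400-line tail of that container's logs. The same output can be pulled by hand from the node; a sketch using the kube-apiserver container ID found above (assumes minikube ssh access to the addons-807933 VM):

    $ minikube -p addons-807933 ssh -- sudo crictl ps -a --name=kube-apiserver
    $ minikube -p addons-807933 ssh -- \
        sudo crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75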
	I0520 10:24:20.667129   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:20.667151   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:24:20.667198   12352 out.go:239] X Problems detected in kubelet:
	W0520 10:24:20.667209   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:20.667218   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:20.667226   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:20.667233   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
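The two kubelet problems flagged here appear to come from the node authorizer: the kubelet on addons-807933 tried to list the gcp-auth-certs secret before any pod referencing it had been bound to the node, so the API server denied the request ("no relationship found between node ... and this object"). Such warnings are typically transient once the gcp-auth pods are scheduled; an illustrative after-the-fact check that the secret exists:

    $ kubectl --context addons-807933 -n gcp-auth get secret gcp-auth-certs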
	I0520 10:24:30.668372   12352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:24:30.688716   12352 api_server.go:72] duration metric: took 1m58.103188006s to wait for apiserver process to appear ...
	I0520 10:24:30.688744   12352 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:24:30.688776   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:24:30.688834   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:24:30.738498   12352 cri.go:89] found id: "f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:30.738524   12352 cri.go:89] found id: ""
	I0520 10:24:30.738533   12352 logs.go:276] 1 containers: [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75]
	I0520 10:24:30.738588   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.743295   12352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:24:30.743355   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:24:30.785220   12352 cri.go:89] found id: "0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:30.785244   12352 cri.go:89] found id: ""
	I0520 10:24:30.785253   12352 logs.go:276] 1 containers: [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27]
	I0520 10:24:30.785310   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.789718   12352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:24:30.789771   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:24:30.830436   12352 cri.go:89] found id: "0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:30.830460   12352 cri.go:89] found id: ""
	I0520 10:24:30.830469   12352 logs.go:276] 1 containers: [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2]
	I0520 10:24:30.830528   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.834815   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:24:30.834879   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:24:30.876193   12352 cri.go:89] found id: "78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:30.876212   12352 cri.go:89] found id: ""
	I0520 10:24:30.876219   12352 logs.go:276] 1 containers: [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e]
	I0520 10:24:30.876269   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.880999   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:24:30.881064   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:24:30.923098   12352 cri.go:89] found id: "dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:30.923126   12352 cri.go:89] found id: ""
	I0520 10:24:30.923135   12352 logs.go:276] 1 containers: [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad]
	I0520 10:24:30.923181   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.927467   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:24:30.927517   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:24:30.972749   12352 cri.go:89] found id: "01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:30.972763   12352 cri.go:89] found id: ""
	I0520 10:24:30.972770   12352 logs.go:276] 1 containers: [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136]
	I0520 10:24:30.972812   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:30.977493   12352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:24:30.977547   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:24:31.020852   12352 cri.go:89] found id: ""
	I0520 10:24:31.020875   12352 logs.go:276] 0 containers: []
	W0520 10:24:31.020884   12352 logs.go:278] No container was found matching "kindnet"
	I0520 10:24:31.020893   12352 logs.go:123] Gathering logs for kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] ...
	I0520 10:24:31.020909   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:31.075622   12352 logs.go:123] Gathering logs for etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] ...
	I0520 10:24:31.075650   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:31.146942   12352 logs.go:123] Gathering logs for container status ...
	I0520 10:24:31.146975   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:24:31.208712   12352 logs.go:123] Gathering logs for kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] ...
	I0520 10:24:31.208740   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:31.247891   12352 logs.go:123] Gathering logs for kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] ...
	I0520 10:24:31.247920   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:31.321221   12352 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:24:31.321253   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:24:32.208184   12352 logs.go:123] Gathering logs for kubelet ...
	I0520 10:24:32.208223   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:24:32.277155   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:32.277312   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:32.297331   12352 logs.go:123] Gathering logs for dmesg ...
	I0520 10:24:32.297353   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:24:32.312440   12352 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:24:32.312466   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:24:32.440156   12352 logs.go:123] Gathering logs for coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] ...
	I0520 10:24:32.440182   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:32.482207   12352 logs.go:123] Gathering logs for kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] ...
	I0520 10:24:32.482238   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:32.528151   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:32.528178   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:24:32.528226   12352 out.go:239] X Problems detected in kubelet:
	W0520 10:24:32.528236   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:32.528244   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:32.528251   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:32.528258   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:42.529965   12352 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I0520 10:24:42.534312   12352 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I0520 10:24:42.535368   12352 api_server.go:141] control plane version: v1.30.1
	I0520 10:24:42.535389   12352 api_server.go:131] duration metric: took 11.846639044s to wait for apiserver health ...
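The healthz probe above is a plain HTTPS GET against the API server endpoint reported in the log; it can be reproduced from the host, assuming the default minikube CA location (~/.minikube/ca.crt); with -k the certificate check is skipped instead:

    $ curl --cacert ~/.minikube/ca.crt https://192.168.39.86:8443/healthz
    ok
    $ curl -k https://192.168.39.86:8443/healthz   # same check without verifying the cluster CA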
	I0520 10:24:42.535396   12352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:24:42.535414   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 10:24:42.535460   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 10:24:42.585491   12352 cri.go:89] found id: "f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:42.585512   12352 cri.go:89] found id: ""
	I0520 10:24:42.585521   12352 logs.go:276] 1 containers: [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75]
	I0520 10:24:42.585578   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.589902   12352 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 10:24:42.589959   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 10:24:42.627921   12352 cri.go:89] found id: "0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:42.627939   12352 cri.go:89] found id: ""
	I0520 10:24:42.627945   12352 logs.go:276] 1 containers: [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27]
	I0520 10:24:42.627987   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.632346   12352 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 10:24:42.632401   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 10:24:42.674788   12352 cri.go:89] found id: "0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:42.674806   12352 cri.go:89] found id: ""
	I0520 10:24:42.674819   12352 logs.go:276] 1 containers: [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2]
	I0520 10:24:42.674865   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.680857   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 10:24:42.680921   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 10:24:42.719199   12352 cri.go:89] found id: "78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:42.719218   12352 cri.go:89] found id: ""
	I0520 10:24:42.719225   12352 logs.go:276] 1 containers: [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e]
	I0520 10:24:42.719269   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.723653   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 10:24:42.723708   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 10:24:42.762888   12352 cri.go:89] found id: "dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:42.762913   12352 cri.go:89] found id: ""
	I0520 10:24:42.762921   12352 logs.go:276] 1 containers: [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad]
	I0520 10:24:42.762975   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.767276   12352 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 10:24:42.767324   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 10:24:42.810973   12352 cri.go:89] found id: "01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:42.810993   12352 cri.go:89] found id: ""
	I0520 10:24:42.811005   12352 logs.go:276] 1 containers: [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136]
	I0520 10:24:42.811061   12352 ssh_runner.go:195] Run: which crictl
	I0520 10:24:42.815644   12352 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 10:24:42.815705   12352 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 10:24:42.855406   12352 cri.go:89] found id: ""
	I0520 10:24:42.855443   12352 logs.go:276] 0 containers: []
	W0520 10:24:42.855453   12352 logs.go:278] No container was found matching "kindnet"
	I0520 10:24:42.855465   12352 logs.go:123] Gathering logs for describe nodes ...
	I0520 10:24:42.855480   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 10:24:42.975833   12352 logs.go:123] Gathering logs for coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] ...
	I0520 10:24:42.975867   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2"
	I0520 10:24:43.016882   12352 logs.go:123] Gathering logs for dmesg ...
	I0520 10:24:43.016911   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 10:24:43.032116   12352 logs.go:123] Gathering logs for kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] ...
	I0520 10:24:43.032142   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75"
	I0520 10:24:43.081740   12352 logs.go:123] Gathering logs for etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] ...
	I0520 10:24:43.081773   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27"
	I0520 10:24:43.136339   12352 logs.go:123] Gathering logs for kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] ...
	I0520 10:24:43.136373   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e"
	I0520 10:24:43.187372   12352 logs.go:123] Gathering logs for kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] ...
	I0520 10:24:43.187400   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad"
	I0520 10:24:43.226989   12352 logs.go:123] Gathering logs for kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] ...
	I0520 10:24:43.227014   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136"
	I0520 10:24:43.285466   12352 logs.go:123] Gathering logs for CRI-O ...
	I0520 10:24:43.285495   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 10:24:44.058186   12352 logs.go:123] Gathering logs for container status ...
	I0520 10:24:44.058230   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 10:24:44.116999   12352 logs.go:123] Gathering logs for kubelet ...
	I0520 10:24:44.117034   12352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 10:24:44.180167   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:44.180350   12352 logs.go:138] Found kubelet problem: May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:44.201294   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:44.201318   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 10:24:44.201377   12352 out.go:239] X Problems detected in kubelet:
	W0520 10:24:44.201391   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: W0520 10:22:44.644175    1277 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	W0520 10:24:44.201400   12352 out.go:239]   May 20 10:22:44 addons-807933 kubelet[1277]: E0520 10:22:44.644207    1277 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-807933" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-807933' and this object
	I0520 10:24:44.201413   12352 out.go:304] Setting ErrFile to fd 2...
	I0520 10:24:44.201424   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:24:54.211778   12352 system_pods.go:59] 18 kube-system pods found
	I0520 10:24:54.211817   12352 system_pods.go:61] "coredns-7db6d8ff4d-mq6st" [daaf804f-262b-4746-bc6e-d36538c125eb] Running
	I0520 10:24:54.211823   12352 system_pods.go:61] "csi-hostpath-attacher-0" [b4ccbec5-2ba2-43dc-930a-e427da3ffae1] Running
	I0520 10:24:54.211828   12352 system_pods.go:61] "csi-hostpath-resizer-0" [ecec8bb0-5ce1-4b2b-bfff-3193a56d99fd] Running
	I0520 10:24:54.211832   12352 system_pods.go:61] "csi-hostpathplugin-mckcj" [281dad2c-e7c6-4f48-9908-f6d1749c2207] Running
	I0520 10:24:54.211836   12352 system_pods.go:61] "etcd-addons-807933" [2d966690-4f95-4ba8-98a3-68fa25fb87d5] Running
	I0520 10:24:54.211842   12352 system_pods.go:61] "kube-apiserver-addons-807933" [9eb73c06-6275-419c-a0d8-7a8048b07d89] Running
	I0520 10:24:54.211846   12352 system_pods.go:61] "kube-controller-manager-addons-807933" [b51f0eab-78d0-4d6f-b57a-3128ac4e5e32] Running
	I0520 10:24:54.211851   12352 system_pods.go:61] "kube-ingress-dns-minikube" [29cc0a3f-eade-48c7-89c4-00bb5444237c] Running
	I0520 10:24:54.211855   12352 system_pods.go:61] "kube-proxy-twglf" [6854df94-6335-4952-ba2e-a0d93ed0be5c] Running
	I0520 10:24:54.211859   12352 system_pods.go:61] "kube-scheduler-addons-807933" [35a08f65-737a-49e3-ac2d-ab8ed326766f] Running
	I0520 10:24:54.211864   12352 system_pods.go:61] "metrics-server-c59844bb4-8j5ls" [bcafd302-7969-429c-b306-290348b05d85] Running
	I0520 10:24:54.211868   12352 system_pods.go:61] "nvidia-device-plugin-daemonset-2bwlq" [2f6875de-2696-4e7c-beb9-27efa228893f] Running
	I0520 10:24:54.211873   12352 system_pods.go:61] "registry-26dx9" [d90bb428-cf8d-481e-8467-e72e21658dcd] Running
	I0520 10:24:54.211878   12352 system_pods.go:61] "registry-proxy-vw56p" [d92f24a6-e6e3-4551-991f-30a570338e6a] Running
	I0520 10:24:54.211883   12352 system_pods.go:61] "snapshot-controller-745499f584-p4wsh" [93319461-fdba-42b8-956b-f1ac8250d1b2] Running
	I0520 10:24:54.211889   12352 system_pods.go:61] "snapshot-controller-745499f584-xlkcl" [5c4efb7e-3a0b-41f9-8e68-526168622579] Running
	I0520 10:24:54.211896   12352 system_pods.go:61] "storage-provisioner" [6742392f-cc63-40e3-8735-d483096cc9d1] Running
	I0520 10:24:54.211902   12352 system_pods.go:61] "tiller-deploy-6677d64bcd-lmqq4" [5327dee8-aaa4-44c5-9cf0-a904ab8b1b4a] Running
	I0520 10:24:54.211910   12352 system_pods.go:74] duration metric: took 11.676508836s to wait for pod list to return data ...
	I0520 10:24:54.211922   12352 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:24:54.214088   12352 default_sa.go:45] found service account: "default"
	I0520 10:24:54.214121   12352 default_sa.go:55] duration metric: took 2.188673ms for default service account to be created ...
	I0520 10:24:54.214130   12352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:24:54.221870   12352 system_pods.go:86] 18 kube-system pods found
	I0520 10:24:54.221897   12352 system_pods.go:89] "coredns-7db6d8ff4d-mq6st" [daaf804f-262b-4746-bc6e-d36538c125eb] Running
	I0520 10:24:54.221905   12352 system_pods.go:89] "csi-hostpath-attacher-0" [b4ccbec5-2ba2-43dc-930a-e427da3ffae1] Running
	I0520 10:24:54.221911   12352 system_pods.go:89] "csi-hostpath-resizer-0" [ecec8bb0-5ce1-4b2b-bfff-3193a56d99fd] Running
	I0520 10:24:54.221917   12352 system_pods.go:89] "csi-hostpathplugin-mckcj" [281dad2c-e7c6-4f48-9908-f6d1749c2207] Running
	I0520 10:24:54.221923   12352 system_pods.go:89] "etcd-addons-807933" [2d966690-4f95-4ba8-98a3-68fa25fb87d5] Running
	I0520 10:24:54.221929   12352 system_pods.go:89] "kube-apiserver-addons-807933" [9eb73c06-6275-419c-a0d8-7a8048b07d89] Running
	I0520 10:24:54.221935   12352 system_pods.go:89] "kube-controller-manager-addons-807933" [b51f0eab-78d0-4d6f-b57a-3128ac4e5e32] Running
	I0520 10:24:54.221942   12352 system_pods.go:89] "kube-ingress-dns-minikube" [29cc0a3f-eade-48c7-89c4-00bb5444237c] Running
	I0520 10:24:54.221951   12352 system_pods.go:89] "kube-proxy-twglf" [6854df94-6335-4952-ba2e-a0d93ed0be5c] Running
	I0520 10:24:54.221958   12352 system_pods.go:89] "kube-scheduler-addons-807933" [35a08f65-737a-49e3-ac2d-ab8ed326766f] Running
	I0520 10:24:54.221968   12352 system_pods.go:89] "metrics-server-c59844bb4-8j5ls" [bcafd302-7969-429c-b306-290348b05d85] Running
	I0520 10:24:54.221974   12352 system_pods.go:89] "nvidia-device-plugin-daemonset-2bwlq" [2f6875de-2696-4e7c-beb9-27efa228893f] Running
	I0520 10:24:54.221981   12352 system_pods.go:89] "registry-26dx9" [d90bb428-cf8d-481e-8467-e72e21658dcd] Running
	I0520 10:24:54.221989   12352 system_pods.go:89] "registry-proxy-vw56p" [d92f24a6-e6e3-4551-991f-30a570338e6a] Running
	I0520 10:24:54.221996   12352 system_pods.go:89] "snapshot-controller-745499f584-p4wsh" [93319461-fdba-42b8-956b-f1ac8250d1b2] Running
	I0520 10:24:54.222006   12352 system_pods.go:89] "snapshot-controller-745499f584-xlkcl" [5c4efb7e-3a0b-41f9-8e68-526168622579] Running
	I0520 10:24:54.222013   12352 system_pods.go:89] "storage-provisioner" [6742392f-cc63-40e3-8735-d483096cc9d1] Running
	I0520 10:24:54.222022   12352 system_pods.go:89] "tiller-deploy-6677d64bcd-lmqq4" [5327dee8-aaa4-44c5-9cf0-a904ab8b1b4a] Running
	I0520 10:24:54.222030   12352 system_pods.go:126] duration metric: took 7.892886ms to wait for k8s-apps to be running ...
	I0520 10:24:54.222039   12352 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:24:54.222095   12352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:24:54.245886   12352 system_svc.go:56] duration metric: took 23.840706ms WaitForService to wait for kubelet
	I0520 10:24:54.245919   12352 kubeadm.go:576] duration metric: took 2m21.660395751s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:24:54.245944   12352 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:24:54.249069   12352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:24:54.249091   12352 node_conditions.go:123] node cpu capacity is 2
	I0520 10:24:54.249104   12352 node_conditions.go:105] duration metric: took 3.154722ms to run NodePressure ...
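The NodePressure step reads the node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage above); the same fields can be queried directly, illustratively:

    $ kubectl --context addons-807933 get node addons-807933 \
        -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'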
	I0520 10:24:54.249115   12352 start.go:240] waiting for startup goroutines ...
	I0520 10:24:54.249124   12352 start.go:245] waiting for cluster config update ...
	I0520 10:24:54.249138   12352 start.go:254] writing updated cluster config ...
	I0520 10:24:54.249400   12352 ssh_runner.go:195] Run: rm -f paused
	I0520 10:24:54.299257   12352 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:24:54.301323   12352 out.go:177] * Done! kubectl is now configured to use "addons-807933" cluster and "default" namespace by default
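With the profile reported ready, a quick illustrative check from the host that the context switch took effect and the API is reachable:

    $ kubectl config current-context     # expected: addons-807933
    $ kubectl --context addons-807933 get nodes -o wide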
	
	
	==> CRI-O <==
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.796971745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201050796945940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99bed2a5-19ad-4944-81ff-e9f7976f058a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.798439806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65bf4eb0-81d8-4122-86c9-3f30d078da07 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.798518869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65bf4eb0-81d8-4122-86c9-3f30d078da07 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.804034979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25edf660d8e0349eab2c24d609c085b354a740db1500ee0ee30ebcfd0603f144,PodSandboxId:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716200862405014463,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b360723,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f04d6299b3c59db2d1b2009b97a2e6f596a6f70d700f8dd92710a6987519b8,PodSandboxId:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716200721959483961,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,},Annotations:map[string]string{io.kubern
etes.container.hash: f69fdaa6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3abcb59b60ea8124e50095e6b5505433e8979856645b96d9f626645bfca5e875,PodSandboxId:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716200706373944840,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 3a27a456-c5bb-455f-9228-1176f6695168,},Annotations:map[string]string{io.kubernetes.container.hash: 6618428d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3,PodSandboxId:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716200638625275107,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,},Annotations:map[string]string{io.kubernetes.container.hash: 8bce9a32,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cd4b11836ec1caece9bd19f5fb75a2f9a95cec04fe5c83859e36e621c5a90,PodSandboxId:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1716200615076945512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,},Annotations:map[string]string{io.kubernetes.container.hash: ea447b4c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e96316e67aa9833fd4520bbee6643a19d991aa16517c65fe40dffcee51f274,PodSandboxId:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716200605002947565,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,},Annotations:map[string]string{io.kubernetes.container.hash: 6cff23d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2ad193ac9f00d256ecd39b436bf964eb0d37b7b864145a28031abad002faf5,PodSandboxId:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716200588326514008,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,},Annotations:map[string]string{io.kubernetes.container.hash: b0b217f4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,PodSandboxId:39879541f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716200559437008065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,PodSandboxId:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716200555854745557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb
511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,PodSandboxId:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716200553898429247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b20ded1733cce3ca0614825c3a8243c49f
4fdc3165d51d5bdbf80d51219b0e,PodSandboxId:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716200533662045003,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bce967d30acaffabbda8c913f4d582e5ff41585bfba58073963
23c8cad1136,PodSandboxId:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716200533637665567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc118a7deeb385af7c8c02997ad3c01f4ac7604711
393a111cce06f09c0dc27,PodSandboxId:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716200533601660269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,PodSandboxId:42ec4aa1d5e4
867e31253ddd10c408c25883d9fc8bdb92decba052d4302dc2b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716200533552390047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65bf4eb0-81d8-4122-86c9-3f30d078da07 name=/runtime.v1.RuntimeService/Lis
tContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.850664323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eaba6944-7567-42c7-9280-aaea5b5bef6b name=/runtime.v1.RuntimeService/Version
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.850807901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eaba6944-7567-42c7-9280-aaea5b5bef6b name=/runtime.v1.RuntimeService/Version
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.853553647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=808040d4-1436-4862-9d34-70ed39e70ffd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.856224572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201050856195124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808040d4-1436-4862-9d34-70ed39e70ffd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.857174210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a424f5b8-1bf1-4e7b-9f45-e15c1922b796 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.857392489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a424f5b8-1bf1-4e7b-9f45-e15c1922b796 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.858147380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25edf660d8e0349eab2c24d609c085b354a740db1500ee0ee30ebcfd0603f144,PodSandboxId:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716200862405014463,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b360723,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f04d6299b3c59db2d1b2009b97a2e6f596a6f70d700f8dd92710a6987519b8,PodSandboxId:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716200721959483961,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,},Annotations:map[string]string{io.kubern
etes.container.hash: f69fdaa6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3abcb59b60ea8124e50095e6b5505433e8979856645b96d9f626645bfca5e875,PodSandboxId:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716200706373944840,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 3a27a456-c5bb-455f-9228-1176f6695168,},Annotations:map[string]string{io.kubernetes.container.hash: 6618428d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3,PodSandboxId:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716200638625275107,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,},Annotations:map[string]string{io.kubernetes.container.hash: 8bce9a32,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cd4b11836ec1caece9bd19f5fb75a2f9a95cec04fe5c83859e36e621c5a90,PodSandboxId:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1716200615076945512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,},Annotations:map[string]string{io.kubernetes.container.hash: ea447b4c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e96316e67aa9833fd4520bbee6643a19d991aa16517c65fe40dffcee51f274,PodSandboxId:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716200605002947565,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,},Annotations:map[string]string{io.kubernetes.container.hash: 6cff23d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2ad193ac9f00d256ecd39b436bf964eb0d37b7b864145a28031abad002faf5,PodSandboxId:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716200588326514008,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,},Annotations:map[string]string{io.kubernetes.container.hash: b0b217f4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,PodSandboxId:39879541f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716200559437008065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,PodSandboxId:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716200555854745557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb
511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,PodSandboxId:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716200553898429247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b20ded1733cce3ca0614825c3a8243c49f
4fdc3165d51d5bdbf80d51219b0e,PodSandboxId:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716200533662045003,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bce967d30acaffabbda8c913f4d582e5ff41585bfba58073963
23c8cad1136,PodSandboxId:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716200533637665567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc118a7deeb385af7c8c02997ad3c01f4ac7604711
393a111cce06f09c0dc27,PodSandboxId:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716200533601660269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,PodSandboxId:42ec4aa1d5e4
867e31253ddd10c408c25883d9fc8bdb92decba052d4302dc2b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716200533552390047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a424f5b8-1bf1-4e7b-9f45-e15c1922b796 name=/runtime.v1.RuntimeService/Lis
tContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.894935032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63dd6b00-be58-4d5d-99ee-b4a2d9773207 name=/runtime.v1.RuntimeService/Version
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.895024847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63dd6b00-be58-4d5d-99ee-b4a2d9773207 name=/runtime.v1.RuntimeService/Version
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.897090069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d8b659e-2068-44cc-bb59-954c59a66c08 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.898987763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201050898958942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d8b659e-2068-44cc-bb59-954c59a66c08 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.899648463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbb38bb4-ca13-4603-90ed-37496af83f3f name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.899718313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbb38bb4-ca13-4603-90ed-37496af83f3f name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.900012110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25edf660d8e0349eab2c24d609c085b354a740db1500ee0ee30ebcfd0603f144,PodSandboxId:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716200862405014463,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b360723,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f04d6299b3c59db2d1b2009b97a2e6f596a6f70d700f8dd92710a6987519b8,PodSandboxId:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716200721959483961,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,},Annotations:map[string]string{io.kubern
etes.container.hash: f69fdaa6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3abcb59b60ea8124e50095e6b5505433e8979856645b96d9f626645bfca5e875,PodSandboxId:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716200706373944840,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 3a27a456-c5bb-455f-9228-1176f6695168,},Annotations:map[string]string{io.kubernetes.container.hash: 6618428d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3,PodSandboxId:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716200638625275107,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,},Annotations:map[string]string{io.kubernetes.container.hash: 8bce9a32,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cd4b11836ec1caece9bd19f5fb75a2f9a95cec04fe5c83859e36e621c5a90,PodSandboxId:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1716200615076945512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,},Annotations:map[string]string{io.kubernetes.container.hash: ea447b4c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e96316e67aa9833fd4520bbee6643a19d991aa16517c65fe40dffcee51f274,PodSandboxId:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716200605002947565,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,},Annotations:map[string]string{io.kubernetes.container.hash: 6cff23d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2ad193ac9f00d256ecd39b436bf964eb0d37b7b864145a28031abad002faf5,PodSandboxId:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716200588326514008,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,},Annotations:map[string]string{io.kubernetes.container.hash: b0b217f4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,PodSandboxId:39879541f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716200559437008065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,PodSandboxId:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716200555854745557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb
511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,PodSandboxId:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716200553898429247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b20ded1733cce3ca0614825c3a8243c49f
4fdc3165d51d5bdbf80d51219b0e,PodSandboxId:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716200533662045003,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bce967d30acaffabbda8c913f4d582e5ff41585bfba58073963
23c8cad1136,PodSandboxId:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716200533637665567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc118a7deeb385af7c8c02997ad3c01f4ac7604711
393a111cce06f09c0dc27,PodSandboxId:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716200533601660269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,PodSandboxId:42ec4aa1d5e4
867e31253ddd10c408c25883d9fc8bdb92decba052d4302dc2b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716200533552390047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbb38bb4-ca13-4603-90ed-37496af83f3f name=/runtime.v1.RuntimeService/Lis
tContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.939067746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2939916-b27d-4c9b-9314-08e220745845 name=/runtime.v1.RuntimeService/Version
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.939259714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2939916-b27d-4c9b-9314-08e220745845 name=/runtime.v1.RuntimeService/Version
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.940143174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=161daef8-540b-4658-bde7-f2858a6dea11 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.942003206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201050941972659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584529,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=161daef8-540b-4658-bde7-f2858a6dea11 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.942668716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f7656c6-4ad5-4504-9e6b-633cc2f6899a name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.942726131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f7656c6-4ad5-4504-9e6b-633cc2f6899a name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:30:50 addons-807933 crio[678]: time="2024-05-20 10:30:50.943260170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:25edf660d8e0349eab2c24d609c085b354a740db1500ee0ee30ebcfd0603f144,PodSandboxId:a2e7c7822fa76c3b9c02274ed92d1008a5dad3b91aa05077e8f901862ff71655,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716200862405014463,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-85gqz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d1a9475-8be4-4408-be74-84d6dd2f8b8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b360723,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22f04d6299b3c59db2d1b2009b97a2e6f596a6f70d700f8dd92710a6987519b8,PodSandboxId:0df6c01c2dac8f3e2de8c44383cdded0ff21aadf59245ad39dcf1b0f603e6266,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716200721959483961,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1c7de491-1690-49fe-8238-f9e1b0120d62,},Annotations:map[string]string{io.kubern
etes.container.hash: f69fdaa6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3abcb59b60ea8124e50095e6b5505433e8979856645b96d9f626645bfca5e875,PodSandboxId:6ac45ca42c7f7717afb277078122ed11bc2f5c7da093a64844da94689ffc1dd4,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716200706373944840,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tmsmw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 3a27a456-c5bb-455f-9228-1176f6695168,},Annotations:map[string]string{io.kubernetes.container.hash: 6618428d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3,PodSandboxId:a14f513ff363ebf9764340cd7b8b9244c4a4c27365ef5fffdcd61e53e9e3f744,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716200638625275107,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-6cf67,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a0535f01-6cab-4cf2-8e4a-35bf2f764e62,},Annotations:map[string]string{io.kubernetes.container.hash: 8bce9a32,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465cd4b11836ec1caece9bd19f5fb75a2f9a95cec04fe5c83859e36e621c5a90,PodSandboxId:b435b439acd4093a2e37a7c79496a6073ac5bcec0bffe76f0250a600a6b2804e,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1716200615076945512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-l5zbg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: f12e41f5-3788-4d37-a744-248e4a658254,},Annotations:map[string]string{io.kubernetes.container.hash: ea447b4c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e96316e67aa9833fd4520bbee6643a19d991aa16517c65fe40dffcee51f274,PodSandboxId:47f698f1ebf8e0145c7803f37e5754d179cbe8cac157acd0b94a7ecb4839a320,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1716200605002947565,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-pm765,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 485d8c24-bc61-48a4-a4c0-b9b52de3ea25,},Annotations:map[string]string{io.kubernetes.container.hash: 6cff23d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2ad193ac9f00d256ecd39b436bf964eb0d37b7b864145a28031abad002faf5,PodSandboxId:c90dcfa0d09b6e15a6ad2b46e327d0e8ee7b34aff08a9ee93fb29a9ffd837bbc,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716200588326514008,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-8j5ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcafd302-7969-429c-b306-290348b05d85,},Annotations:map[string]string{io.kubernetes.container.hash: b0b217f4,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88,PodSandboxId:39879541f96cda7e362099633dad874666cee2cb433f49d1d83399de032493f5,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716200559437008065,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6742392f-cc63-40e3-8735-d483096cc9d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7446f7ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2,PodSandboxId:ddca6ca50d59c0f8d15a3ce84a63738e80b0e04a25fbd14d340f7e86c3809a51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716200555854745557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mq6st,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daaf804f-262b-4746-bc6e-d36538c125eb,},Annotations:map[string]string{io.kubernetes.container.hash: e4a4e5da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb
511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad,PodSandboxId:204ea35171763bae1ca9eb27e81e70fe57c5f124aea3405b582f493052272a8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716200553898429247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6854df94-6335-4952-ba2e-a0d93ed0be5c,},Annotations:map[string]string{io.kubernetes.container.hash: 5af81c42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b20ded1733cce3ca0614825c3a8243c49f
4fdc3165d51d5bdbf80d51219b0e,PodSandboxId:46199d25b6624d514576a4aeb41f5d81193e5fcf1cf811a7e33b36aaafa683ea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716200533662045003,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2a66e920b1bbc9702b2e72fd1d02091,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01bce967d30acaffabbda8c913f4d582e5ff41585bfba58073963
23c8cad1136,PodSandboxId:97fecee9f0945fa597fccf12a6c63fae54f25bd68083e8e9d00952d9289fafed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716200533637665567,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be6de6c8b447877c49cad2d9c3c6e55a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc118a7deeb385af7c8c02997ad3c01f4ac7604711
393a111cce06f09c0dc27,PodSandboxId:328c8e8349f476119a634d0ecfbd9b7adf95ad4840a02ad6a34fa596e92d84b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716200533601660269,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 170b1dec80d4c835528c7c40c5df9bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4e35b8b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75,PodSandboxId:42ec4aa1d5e4
867e31253ddd10c408c25883d9fc8bdb92decba052d4302dc2b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716200533552390047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-807933,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2b9f91033eaca22c7aded584f87209e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e34cd90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f7656c6-4ad5-4504-9e6b-633cc2f6899a name=/runtime.v1.RuntimeService/Lis
tContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	25edf660d8e03       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   a2e7c7822fa76       hello-world-app-86c47465fc-85gqz
	22f04d6299b3c       docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                         5 minutes ago       Running             nginx                     0                   0df6c01c2dac8       nginx
	3abcb59b60ea8       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                   5 minutes ago       Running             headlamp                  0                   6ac45ca42c7f7       headlamp-68456f997b-tmsmw
	fa4ba5e754ff4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   a14f513ff363e       gcp-auth-5db96cd9b4-6cf67
	465cd4b11836e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   b435b439acd40       local-path-provisioner-8d985888d-l5zbg
	52e96316e67aa       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   47f698f1ebf8e       yakd-dashboard-5ddbf7d777-pm765
	ba2ad193ac9f0       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   c90dcfa0d09b6       metrics-server-c59844bb4-8j5ls
	95bf199e010eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   39879541f96cd       storage-provisioner
	0c4fe6807851f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   ddca6ca50d59c       coredns-7db6d8ff4d-mq6st
	dbb511b719a84       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        8 minutes ago       Running             kube-proxy                0                   204ea35171763       kube-proxy-twglf
	78b20ded1733c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        8 minutes ago       Running             kube-scheduler            0                   46199d25b6624       kube-scheduler-addons-807933
	01bce967d30ac       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        8 minutes ago       Running             kube-controller-manager   0                   97fecee9f0945       kube-controller-manager-addons-807933
	0cc118a7deeb3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   328c8e8349f47       etcd-addons-807933
	f7dc6cd1195c0       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        8 minutes ago       Running             kube-apiserver            0                   42ec4aa1d5e48       kube-apiserver-addons-807933
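Note: the container status table above is the CRI-level view of the node at collection time. Assuming the cri-o runtime this job uses and the addons-807933 profile named in the report, a comparable listing can usually be reproduced with standard minikube/crictl commands along these lines (a sketch, not part of the captured output):

	  # list all CRI containers on the addons-807933 node
	  minikube -p addons-807933 ssh -- sudo crictl ps -a
	  # inspect one entry from the table, e.g. the hello-world-app container (ID prefix is accepted)
	  minikube -p addons-807933 ssh -- sudo crictl inspect 25edf660d8e03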
	
	
	==> coredns [0c4fe6807851f04973d6d1d7cc95032650e98732048bc4307bc37e8ed36ed2b2] <==
	[INFO] 10.244.0.8:59922 - 58655 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000781145s
	[INFO] 10.244.0.8:36732 - 43419 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100817s
	[INFO] 10.244.0.8:36732 - 53913 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000282971s
	[INFO] 10.244.0.8:40035 - 49290 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063091s
	[INFO] 10.244.0.8:40035 - 11636 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012652s
	[INFO] 10.244.0.8:52237 - 22264 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096401s
	[INFO] 10.244.0.8:52237 - 45305 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000251602s
	[INFO] 10.244.0.8:55243 - 11352 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000224547s
	[INFO] 10.244.0.8:55243 - 36955 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000449759s
	[INFO] 10.244.0.8:36413 - 40609 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106138s
	[INFO] 10.244.0.8:36413 - 63399 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174993s
	[INFO] 10.244.0.8:58756 - 63130 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000054254s
	[INFO] 10.244.0.8:58756 - 42136 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057863s
	[INFO] 10.244.0.8:48740 - 462 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062402s
	[INFO] 10.244.0.8:48740 - 39372 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061194s
	[INFO] 10.244.0.22:35763 - 16855 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000369652s
	[INFO] 10.244.0.22:38764 - 5383 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000071741s
	[INFO] 10.244.0.22:45513 - 46688 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000085209s
	[INFO] 10.244.0.22:59596 - 25851 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000057536s
	[INFO] 10.244.0.22:33447 - 4009 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062981s
	[INFO] 10.244.0.22:41135 - 51630 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072994s
	[INFO] 10.244.0.22:50421 - 38875 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000457828s
	[INFO] 10.244.0.22:59684 - 39361 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000414696s
	[INFO] 10.244.0.24:39961 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000566482s
	[INFO] 10.244.0.24:53873 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154189s
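The alternating NXDOMAIN/NOERROR pairs above are the expected effect of the pod resolver's search-path expansion: with the default ndots:5 setting, a short name such as storage.googleapis.com is first tried with each cluster search suffix appended before the bare name resolves. A typical in-cluster resolv.conf looks roughly like the sketch below; the nameserver IP is the conventional kube-dns ClusterIP and is an assumption, not a value taken from this run:

	  # illustrative /etc/resolv.conf for a pod in the gcp-auth namespace (values assumed)
	  nameserver 10.96.0.10
	  search gcp-auth.svc.cluster.local svc.cluster.local cluster.local
	  options ndots:5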
	
	
	==> describe nodes <==
	Name:               addons-807933
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-807933
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=addons-807933
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_22_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-807933
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:22:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-807933
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:30:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:27:55 +0000   Mon, 20 May 2024 10:22:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:27:55 +0000   Mon, 20 May 2024 10:22:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:27:55 +0000   Mon, 20 May 2024 10:22:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:27:55 +0000   Mon, 20 May 2024 10:22:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    addons-807933
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 f092bdd898a34a4aa0965a4ec711a781
	  System UUID:                f092bdd8-98a3-4a4a-a096-5a4ec711a781
	  Boot ID:                    8009c0da-3784-40d2-9986-64c33fdbb8fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-85gqz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  gcp-auth                    gcp-auth-5db96cd9b4-6cf67                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  headlamp                    headlamp-68456f997b-tmsmw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 coredns-7db6d8ff4d-mq6st                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m19s
	  kube-system                 etcd-addons-807933                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m33s
	  kube-system                 kube-apiserver-addons-807933              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-controller-manager-addons-807933     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-twglf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-addons-807933              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 metrics-server-c59844bb4-8j5ls            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m14s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  local-path-storage          local-path-provisioner-8d985888d-l5zbg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-pm765           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m16s  kube-proxy       
	  Normal  Starting                 8m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m33s  kubelet          Node addons-807933 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s  kubelet          Node addons-807933 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s  kubelet          Node addons-807933 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m32s  kubelet          Node addons-807933 status is now: NodeReady
	  Normal  RegisteredNode           8m20s  node-controller  Node addons-807933 event: Registered Node addons-807933 in Controller
	
	
	==> dmesg <==
	[  +0.154950] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.148630] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.272822] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.280123] kauditd_printk_skb: 95 callbacks suppressed
	[May20 10:23] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.502298] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.411511] kauditd_printk_skb: 30 callbacks suppressed
	[ +19.875950] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.026579] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.671427] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.427012] kauditd_printk_skb: 11 callbacks suppressed
	[May20 10:24] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.038060] kauditd_printk_skb: 38 callbacks suppressed
	[ +41.312048] kauditd_printk_skb: 24 callbacks suppressed
	[May20 10:25] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.262235] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.501325] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.721401] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.791704] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.540994] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.057244] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.935191] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.245271] kauditd_printk_skb: 33 callbacks suppressed
	[May20 10:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.085116] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0cc118a7deeb385af7c8c02997ad3c01f4ac7604711393a111cce06f09c0dc27] <==
	{"level":"warn","ts":"2024-05-20T10:23:53.15191Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:23:52.726059Z","time spent":"425.845476ms","remote":"127.0.0.1:47296","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85507,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-05-20T10:23:53.151527Z","caller":"traceutil/trace.go:171","msg":"trace[1190093925] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"425.338548ms","start":"2024-05-20T10:23:52.726167Z","end":"2024-05-20T10:23:53.151505Z","steps":["trace[1190093925] 'read index received'  (duration: 425.329168ms)","trace[1190093925] 'applied index is now lower than readState.Index'  (duration: 7.613µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:23:53.161209Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.973673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:23:53.161263Z","caller":"traceutil/trace.go:171","msg":"trace[577843866] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1098; }","duration":"290.0538ms","start":"2024-05-20T10:23:52.8712Z","end":"2024-05-20T10:23:53.161253Z","steps":["trace[577843866] 'agreement among raft nodes before linearized reading'  (duration: 289.969227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:23:53.161561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.337168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14357"}
	{"level":"info","ts":"2024-05-20T10:23:53.161611Z","caller":"traceutil/trace.go:171","msg":"trace[769988353] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1098; }","duration":"191.451691ms","start":"2024-05-20T10:23:52.970153Z","end":"2024-05-20T10:23:53.161604Z","steps":["trace[769988353] 'agreement among raft nodes before linearized reading'  (duration: 191.324227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:23:58.143712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.950657ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T10:23:58.143785Z","caller":"traceutil/trace.go:171","msg":"trace[161434647] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1126; }","duration":"122.037686ms","start":"2024-05-20T10:23:58.021734Z","end":"2024-05-20T10:23:58.143771Z","steps":["trace[161434647] 'range keys from in-memory index tree'  (duration: 121.933856ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:24:01.317432Z","caller":"traceutil/trace.go:171","msg":"trace[2107161008] linearizableReadLoop","detail":"{readStateIndex:1183; appliedIndex:1182; }","duration":"129.731172ms","start":"2024-05-20T10:24:01.187688Z","end":"2024-05-20T10:24:01.317419Z","steps":["trace[2107161008] 'read index received'  (duration: 129.008165ms)","trace[2107161008] 'applied index is now lower than readState.Index'  (duration: 722.379µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:24:01.317675Z","caller":"traceutil/trace.go:171","msg":"trace[24168834] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"157.849027ms","start":"2024-05-20T10:24:01.159815Z","end":"2024-05-20T10:24:01.317664Z","steps":["trace[24168834] 'process raft request'  (duration: 157.455086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:24:01.317916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.236108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-8j5ls\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2024-05-20T10:24:01.317957Z","caller":"traceutil/trace.go:171","msg":"trace[831994397] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-8j5ls; range_end:; response_count:1; response_revision:1151; }","duration":"130.349534ms","start":"2024-05-20T10:24:01.187601Z","end":"2024-05-20T10:24:01.317951Z","steps":["trace[831994397] 'agreement among raft nodes before linearized reading'  (duration: 130.196879ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:25:05.880936Z","caller":"traceutil/trace.go:171","msg":"trace[1692349095] linearizableReadLoop","detail":"{readStateIndex:1387; appliedIndex:1386; }","duration":"351.199363ms","start":"2024-05-20T10:25:05.529701Z","end":"2024-05-20T10:25:05.8809Z","steps":["trace[1692349095] 'read index received'  (duration: 351.037465ms)","trace[1692349095] 'applied index is now lower than readState.Index'  (duration: 161.473µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T10:25:05.88126Z","caller":"traceutil/trace.go:171","msg":"trace[2067711876] transaction","detail":"{read_only:false; response_revision:1339; number_of_response:1; }","duration":"414.913921ms","start":"2024-05-20T10:25:05.466333Z","end":"2024-05-20T10:25:05.881247Z","steps":["trace[2067711876] 'process raft request'  (duration: 414.465075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:25:05.881526Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:25:05.466254Z","time spent":"415.105962ms","remote":"127.0.0.1:47296","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1672,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1628 >> failure:<>"}
	{"level":"warn","ts":"2024-05-20T10:25:05.881783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.076658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-05-20T10:25:05.881837Z","caller":"traceutil/trace.go:171","msg":"trace[937458996] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1339; }","duration":"352.194709ms","start":"2024-05-20T10:25:05.529633Z","end":"2024-05-20T10:25:05.881827Z","steps":["trace[937458996] 'agreement among raft nodes before linearized reading'  (duration: 352.048071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:25:05.881879Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:25:05.529619Z","time spent":"352.254984ms","remote":"127.0.0.1:47264","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":844,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"info","ts":"2024-05-20T10:25:20.841822Z","caller":"traceutil/trace.go:171","msg":"trace[204895821] linearizableReadLoop","detail":"{readStateIndex:1527; appliedIndex:1526; }","duration":"174.914308ms","start":"2024-05-20T10:25:20.666886Z","end":"2024-05-20T10:25:20.8418Z","steps":["trace[204895821] 'read index received'  (duration: 174.757908ms)","trace[204895821] 'applied index is now lower than readState.Index'  (duration: 155.983µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T10:25:20.842026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.114553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5949"}
	{"level":"info","ts":"2024-05-20T10:25:20.842068Z","caller":"traceutil/trace.go:171","msg":"trace[749608842] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1470; }","duration":"175.200925ms","start":"2024-05-20T10:25:20.66686Z","end":"2024-05-20T10:25:20.842061Z","steps":["trace[749608842] 'agreement among raft nodes before linearized reading'  (duration: 175.049085ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:25:20.842334Z","caller":"traceutil/trace.go:171","msg":"trace[1811531730] transaction","detail":"{read_only:false; response_revision:1470; number_of_response:1; }","duration":"221.47184ms","start":"2024-05-20T10:25:20.620796Z","end":"2024-05-20T10:25:20.842268Z","steps":["trace[1811531730] 'process raft request'  (duration: 220.891499ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T10:25:27.697968Z","caller":"traceutil/trace.go:171","msg":"trace[683574811] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"319.176593ms","start":"2024-05-20T10:25:27.378775Z","end":"2024-05-20T10:25:27.697952Z","steps":["trace[683574811] 'process raft request'  (duration: 318.865016ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:25:27.698085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T10:25:27.378758Z","time spent":"319.251391ms","remote":"127.0.0.1:47368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1477 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-05-20T10:25:59.297022Z","caller":"traceutil/trace.go:171","msg":"trace[762250467] transaction","detail":"{read_only:false; response_revision:1763; number_of_response:1; }","duration":"242.148368ms","start":"2024-05-20T10:25:59.05486Z","end":"2024-05-20T10:25:59.297008Z","steps":["trace[762250467] 'process raft request'  (duration: 242.059861ms)"],"step_count":1}
	
	
	==> gcp-auth [fa4ba5e754ff4a58ca20ee1cba03cfbe2b2e66f80284195ad40a28e381863ff3] <==
	2024/05/20 10:23:58 GCP Auth Webhook started!
	2024/05/20 10:25:00 Ready to marshal response ...
	2024/05/20 10:25:00 Ready to write response ...
	2024/05/20 10:25:00 Ready to marshal response ...
	2024/05/20 10:25:00 Ready to write response ...
	2024/05/20 10:25:00 Ready to marshal response ...
	2024/05/20 10:25:00 Ready to write response ...
	2024/05/20 10:25:05 Ready to marshal response ...
	2024/05/20 10:25:05 Ready to write response ...
	2024/05/20 10:25:07 Ready to marshal response ...
	2024/05/20 10:25:07 Ready to write response ...
	2024/05/20 10:25:15 Ready to marshal response ...
	2024/05/20 10:25:15 Ready to write response ...
	2024/05/20 10:25:18 Ready to marshal response ...
	2024/05/20 10:25:18 Ready to write response ...
	2024/05/20 10:25:23 Ready to marshal response ...
	2024/05/20 10:25:23 Ready to write response ...
	2024/05/20 10:25:24 Ready to marshal response ...
	2024/05/20 10:25:24 Ready to write response ...
	2024/05/20 10:25:38 Ready to marshal response ...
	2024/05/20 10:25:38 Ready to write response ...
	2024/05/20 10:25:40 Ready to marshal response ...
	2024/05/20 10:25:40 Ready to write response ...
	2024/05/20 10:27:38 Ready to marshal response ...
	2024/05/20 10:27:38 Ready to write response ...
	
	
	==> kernel <==
	 10:30:51 up 9 min,  0 users,  load average: 0.13, 0.45, 0.36
	Linux addons-807933 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f7dc6cd1195c01055f3a45be3fb51840275ffd4f800a19ef1cc20c5759c19f75] <==
	E0520 10:24:18.493787       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.157.213:443: connect: connection refused
	W0520 10:24:18.493957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 10:24:18.494026       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0520 10:24:18.494605       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.157.213:443: connect: connection refused
	E0520 10:24:18.499670       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.157.213:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.157.213:443: connect: connection refused
	I0520 10:24:18.570760       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 10:25:00.881941       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.185.251"}
	I0520 10:25:15.719897       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 10:25:15.902223       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.32.238"}
	I0520 10:25:18.520816       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0520 10:25:19.561780       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0520 10:25:34.735214       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0520 10:25:55.066758       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.067246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:25:55.095618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.096050       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:25:55.129992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.130155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:25:55.158256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:25:55.158405       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0520 10:25:56.130377       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 10:25:56.158548       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0520 10:25:56.228507       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 10:27:38.948989       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.52.112"}
	
	
	==> kube-controller-manager [01bce967d30acaffabbda8c913f4d582e5ff41585bfba5807396323c8cad1136] <==
	W0520 10:29:05.092441       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:05.092570       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:29:10.466736       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:10.466985       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:29:11.403975       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:11.404083       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:29:14.316938       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:14.317070       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:29:41.316937       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:41.317043       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:29:43.476059       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:43.476128       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:29:53.868674       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:29:53.868777       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:30:05.174869       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:30:05.174972       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:30:27.609714       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:30:27.609814       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:30:28.811893       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:30:28.811947       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:30:36.412106       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:30:36.412196       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:30:40.433982       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:30:40.434068       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:30:49.880811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="100.914µs"
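The repeated PartialObjectMetadata list/watch failures above are consistent with the metadata informer still tracking API groups whose backing resources were removed earlier in the run (the kube-apiserver log shows the gadget.kinvolk.io and snapshot.storage.k8s.io cachers being terminated). When triaging, the remaining aggregated APIs and CRDs can be checked with ordinary kubectl queries, for example:

	  # sketch: look for unavailable aggregated APIs and leftover CRDs after addon teardown
	  kubectl --context addons-807933 get apiservices.apiregistration.k8s.io
	  kubectl --context addons-807933 get crd | grep -E 'gadget|snapshot' || true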
	
	
	==> kube-proxy [dbb511b719a84d97f0481a8ae6d7b2eb56f6b6573f4a19f9d4fb64474b1dbaad] <==
	I0520 10:22:34.957793       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:22:34.981559       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.86"]
	I0520 10:22:35.096339       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:22:35.096386       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:22:35.096402       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:22:35.127204       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:22:35.127454       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:22:35.127470       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:22:35.128962       1 config.go:192] "Starting service config controller"
	I0520 10:22:35.128998       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:22:35.129024       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:22:35.129028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:22:35.129390       1 config.go:319] "Starting node config controller"
	I0520 10:22:35.129415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:22:35.235397       1 shared_informer.go:320] Caches are synced for node config
	I0520 10:22:35.235465       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:22:35.235497       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [78b20ded1733cce3ca0614825c3a8243c49f4fdc3165d51d5bdbf80d51219b0e] <==
	E0520 10:22:16.290840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:22:16.290854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:22:16.290861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:22:16.290869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:22:16.290875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:22:16.290887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:22:16.290911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:22:16.290727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:22:16.291138       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:22:16.291170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:22:16.291044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:22:16.291250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:22:17.091875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:22:17.091907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:22:17.151236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:22:17.151269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:22:17.292437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:22:17.292485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:22:17.305855       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:22:17.305904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:22:17.503401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:22:17.503525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:22:17.545707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:22:17.545757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0520 10:22:17.887147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:27:44 addons-807933 kubelet[1277]: I0520 10:27:44.965521    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2"} err="failed to get container status \"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2\": rpc error: code = NotFound desc = could not find container \"67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2\": container with ID starting with 67b8cb1043b1f390859af492c67e2b3db80bfcb66cd0ed5712aa2318ecc16bf2 not found: ID does not exist"
	May 20 10:28:18 addons-807933 kubelet[1277]: E0520 10:28:18.694349    1277 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:28:18 addons-807933 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:28:18 addons-807933 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:28:18 addons-807933 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:28:18 addons-807933 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:28:19 addons-807933 kubelet[1277]: I0520 10:28:19.543945    1277 scope.go:117] "RemoveContainer" containerID="453ad985f33aab2992e37f69d0d95974ea7139517a2e4b5745178e777ab74da2"
	May 20 10:28:19 addons-807933 kubelet[1277]: I0520 10:28:19.566679    1277 scope.go:117] "RemoveContainer" containerID="e7f7b3afbcb9757686e16fef0788c036f791930adf5b30122c2f0909f03f5054"
	May 20 10:29:18 addons-807933 kubelet[1277]: E0520 10:29:18.694921    1277 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:29:18 addons-807933 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:29:18 addons-807933 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:29:18 addons-807933 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:29:18 addons-807933 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:30:18 addons-807933 kubelet[1277]: E0520 10:30:18.694101    1277 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:30:18 addons-807933 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:30:18 addons-807933 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:30:18 addons-807933 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:30:18 addons-807933 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:30:49 addons-807933 kubelet[1277]: I0520 10:30:49.916434    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-85gqz" podStartSLOduration=188.906830584 podStartE2EDuration="3m11.916402581s" podCreationTimestamp="2024-05-20 10:27:38 +0000 UTC" firstStartedPulling="2024-05-20 10:27:39.381199333 +0000 UTC m=+320.897261030" lastFinishedPulling="2024-05-20 10:27:42.390771327 +0000 UTC m=+323.906833027" observedRunningTime="2024-05-20 10:27:42.951323552 +0000 UTC m=+324.467385268" watchObservedRunningTime="2024-05-20 10:30:49.916402581 +0000 UTC m=+511.432464298"
	May 20 10:30:51 addons-807933 kubelet[1277]: I0520 10:30:51.445548    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bcafd302-7969-429c-b306-290348b05d85-tmp-dir\") pod \"bcafd302-7969-429c-b306-290348b05d85\" (UID: \"bcafd302-7969-429c-b306-290348b05d85\") "
	May 20 10:30:51 addons-807933 kubelet[1277]: I0520 10:30:51.445616    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z9w7\" (UniqueName: \"kubernetes.io/projected/bcafd302-7969-429c-b306-290348b05d85-kube-api-access-4z9w7\") pod \"bcafd302-7969-429c-b306-290348b05d85\" (UID: \"bcafd302-7969-429c-b306-290348b05d85\") "
	May 20 10:30:51 addons-807933 kubelet[1277]: I0520 10:30:51.446728    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcafd302-7969-429c-b306-290348b05d85-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "bcafd302-7969-429c-b306-290348b05d85" (UID: "bcafd302-7969-429c-b306-290348b05d85"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 20 10:30:51 addons-807933 kubelet[1277]: I0520 10:30:51.452999    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcafd302-7969-429c-b306-290348b05d85-kube-api-access-4z9w7" (OuterVolumeSpecName: "kube-api-access-4z9w7") pod "bcafd302-7969-429c-b306-290348b05d85" (UID: "bcafd302-7969-429c-b306-290348b05d85"). InnerVolumeSpecName "kube-api-access-4z9w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:30:51 addons-807933 kubelet[1277]: I0520 10:30:51.546225    1277 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bcafd302-7969-429c-b306-290348b05d85-tmp-dir\") on node \"addons-807933\" DevicePath \"\""
	May 20 10:30:51 addons-807933 kubelet[1277]: I0520 10:30:51.546352    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4z9w7\" (UniqueName: \"kubernetes.io/projected/bcafd302-7969-429c-b306-290348b05d85-kube-api-access-4z9w7\") on node \"addons-807933\" DevicePath \"\""
	
	
	==> storage-provisioner [95bf199e010eb3aaef286a369bfab11ebacdf0fa8aafe96c6f2bdd660c9a4c88] <==
	I0520 10:22:40.504688       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:22:40.654568       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:22:40.654823       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:22:40.698770       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:22:40.698904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-807933_44e4357c-0f68-4b46-b0f9-76a01654c97f!
	I0520 10:22:40.699527       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"019bf97e-7887-4eaa-92b6-0d457976fd1e", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-807933_44e4357c-0f68-4b46-b0f9-76a01654c97f became leader
	I0520 10:22:40.900733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-807933_44e4357c-0f68-4b46-b0f9-76a01654c97f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-807933 -n addons-807933
helpers_test.go:261: (dbg) Run:  kubectl --context addons-807933 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (340.43s)

TestAddons/StoppedEnableDisable (154.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-807933
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-807933: exit status 82 (2m0.457489683s)

-- stdout --
	* Stopping node "addons-807933"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-807933" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-807933
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-807933: exit status 11 (21.561374735s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-807933" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-807933
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-807933: exit status 11 (6.143243474s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-807933" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-807933
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-807933: exit status 11 (6.143888749s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.86:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-807933" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.31s)

TestErrorSpam/setup (42.67s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-141390 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-141390 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-141390 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-141390 --driver=kvm2  --container-runtime=crio: (42.672303183s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1"
error_spam_test.go:110: minikube stdout:
* [nospam-141390] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18925
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting "nospam-141390" primary control-plane node in "nospam-141390" cluster
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-141390" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
--- FAIL: TestErrorSpam/setup (42.67s)

TestMultiControlPlane/serial/StopSecondaryNode (141.99s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 node stop m02 -v=7 --alsologtostderr
E0520 10:43:41.114004   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:44:54.310132   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:45:03.034688   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.461608373s)

-- stdout --
	* Stopping node "ha-570850-m02"  ...

-- /stdout --
** stderr ** 
	I0520 10:43:39.376714   26304 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:43:39.377012   26304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:43:39.377022   26304 out.go:304] Setting ErrFile to fd 2...
	I0520 10:43:39.377026   26304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:43:39.377207   26304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:43:39.377454   26304 mustload.go:65] Loading cluster: ha-570850
	I0520 10:43:39.377785   26304 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:43:39.377800   26304 stop.go:39] StopHost: ha-570850-m02
	I0520 10:43:39.378207   26304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:43:39.378255   26304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:43:39.392798   26304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45929
	I0520 10:43:39.393251   26304 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:43:39.393757   26304 main.go:141] libmachine: Using API Version  1
	I0520 10:43:39.393783   26304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:43:39.394188   26304 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:43:39.396571   26304 out.go:177] * Stopping node "ha-570850-m02"  ...
	I0520 10:43:39.397889   26304 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 10:43:39.397922   26304 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:43:39.398136   26304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 10:43:39.398169   26304 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:43:39.400882   26304 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:43:39.401273   26304 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:43:39.401304   26304 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:43:39.401410   26304 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:43:39.401570   26304 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:43:39.401746   26304 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:43:39.401876   26304 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:43:39.493469   26304 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 10:43:39.548566   26304 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 10:43:39.603471   26304 main.go:141] libmachine: Stopping "ha-570850-m02"...
	I0520 10:43:39.603500   26304 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:43:39.605150   26304 main.go:141] libmachine: (ha-570850-m02) Calling .Stop
	I0520 10:43:39.608844   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 0/120
	I0520 10:43:40.610292   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 1/120
	I0520 10:43:41.611523   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 2/120
	I0520 10:43:42.612896   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 3/120
	I0520 10:43:43.614183   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 4/120
	I0520 10:43:44.616212   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 5/120
	I0520 10:43:45.617395   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 6/120
	I0520 10:43:46.619789   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 7/120
	I0520 10:43:47.622112   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 8/120
	I0520 10:43:48.623472   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 9/120
	I0520 10:43:49.625215   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 10/120
	I0520 10:43:50.627346   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 11/120
	I0520 10:43:51.628624   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 12/120
	I0520 10:43:52.629838   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 13/120
	I0520 10:43:53.631877   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 14/120
	I0520 10:43:54.633950   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 15/120
	I0520 10:43:55.636089   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 16/120
	I0520 10:43:56.637347   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 17/120
	I0520 10:43:57.639334   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 18/120
	I0520 10:43:58.641171   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 19/120
	I0520 10:43:59.643269   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 20/120
	I0520 10:44:00.644583   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 21/120
	I0520 10:44:01.645955   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 22/120
	I0520 10:44:02.647224   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 23/120
	I0520 10:44:03.648616   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 24/120
	I0520 10:44:04.650000   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 25/120
	I0520 10:44:05.651376   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 26/120
	I0520 10:44:06.652814   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 27/120
	I0520 10:44:07.654056   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 28/120
	I0520 10:44:08.656139   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 29/120
	I0520 10:44:09.658118   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 30/120
	I0520 10:44:10.659422   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 31/120
	I0520 10:44:11.660718   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 32/120
	I0520 10:44:12.662256   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 33/120
	I0520 10:44:13.663943   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 34/120
	I0520 10:44:14.666107   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 35/120
	I0520 10:44:15.667559   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 36/120
	I0520 10:44:16.668993   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 37/120
	I0520 10:44:17.670541   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 38/120
	I0520 10:44:18.672003   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 39/120
	I0520 10:44:19.674183   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 40/120
	I0520 10:44:20.675316   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 41/120
	I0520 10:44:21.677370   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 42/120
	I0520 10:44:22.678801   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 43/120
	I0520 10:44:23.680346   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 44/120
	I0520 10:44:24.682385   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 45/120
	I0520 10:44:25.683659   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 46/120
	I0520 10:44:26.685382   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 47/120
	I0520 10:44:27.686746   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 48/120
	I0520 10:44:28.687949   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 49/120
	I0520 10:44:29.690017   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 50/120
	I0520 10:44:30.691310   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 51/120
	I0520 10:44:31.692686   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 52/120
	I0520 10:44:32.694135   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 53/120
	I0520 10:44:33.695383   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 54/120
	I0520 10:44:34.697255   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 55/120
	I0520 10:44:35.699454   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 56/120
	I0520 10:44:36.700666   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 57/120
	I0520 10:44:37.701910   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 58/120
	I0520 10:44:38.703255   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 59/120
	I0520 10:44:39.705236   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 60/120
	I0520 10:44:40.706796   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 61/120
	I0520 10:44:41.709143   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 62/120
	I0520 10:44:42.711258   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 63/120
	I0520 10:44:43.713577   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 64/120
	I0520 10:44:44.714978   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 65/120
	I0520 10:44:45.716267   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 66/120
	I0520 10:44:46.717527   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 67/120
	I0520 10:44:47.718776   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 68/120
	I0520 10:44:48.720161   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 69/120
	I0520 10:44:49.722343   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 70/120
	I0520 10:44:50.723521   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 71/120
	I0520 10:44:51.724872   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 72/120
	I0520 10:44:52.726258   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 73/120
	I0520 10:44:53.727565   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 74/120
	I0520 10:44:54.729161   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 75/120
	I0520 10:44:55.730359   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 76/120
	I0520 10:44:56.731936   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 77/120
	I0520 10:44:57.733311   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 78/120
	I0520 10:44:58.734567   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 79/120
	I0520 10:44:59.736704   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 80/120
	I0520 10:45:00.739112   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 81/120
	I0520 10:45:01.740756   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 82/120
	I0520 10:45:02.742055   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 83/120
	I0520 10:45:03.743280   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 84/120
	I0520 10:45:04.744742   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 85/120
	I0520 10:45:05.746011   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 86/120
	I0520 10:45:06.747192   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 87/120
	I0520 10:45:07.748869   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 88/120
	I0520 10:45:08.750079   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 89/120
	I0520 10:45:09.751993   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 90/120
	I0520 10:45:10.753886   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 91/120
	I0520 10:45:11.755442   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 92/120
	I0520 10:45:12.756721   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 93/120
	I0520 10:45:13.757894   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 94/120
	I0520 10:45:14.759894   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 95/120
	I0520 10:45:15.761812   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 96/120
	I0520 10:45:16.763060   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 97/120
	I0520 10:45:17.764400   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 98/120
	I0520 10:45:18.765714   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 99/120
	I0520 10:45:19.767942   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 100/120
	I0520 10:45:20.769324   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 101/120
	I0520 10:45:21.770926   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 102/120
	I0520 10:45:22.772193   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 103/120
	I0520 10:45:23.773551   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 104/120
	I0520 10:45:24.775559   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 105/120
	I0520 10:45:25.776906   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 106/120
	I0520 10:45:26.778229   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 107/120
	I0520 10:45:27.779439   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 108/120
	I0520 10:45:28.781611   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 109/120
	I0520 10:45:29.783511   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 110/120
	I0520 10:45:30.784800   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 111/120
	I0520 10:45:31.786539   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 112/120
	I0520 10:45:32.787929   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 113/120
	I0520 10:45:33.789112   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 114/120
	I0520 10:45:34.790575   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 115/120
	I0520 10:45:35.791789   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 116/120
	I0520 10:45:36.793264   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 117/120
	I0520 10:45:37.794585   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 118/120
	I0520 10:45:38.795967   26304 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 119/120
	I0520 10:45:39.797112   26304 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 10:45:39.797229   26304 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-570850 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (19.243047992s)

-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0520 10:45:39.839472   26730 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:45:39.839759   26730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:45:39.839769   26730 out.go:304] Setting ErrFile to fd 2...
	I0520 10:45:39.839774   26730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:45:39.839933   26730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:45:39.840092   26730 out.go:298] Setting JSON to false
	I0520 10:45:39.840116   26730 mustload.go:65] Loading cluster: ha-570850
	I0520 10:45:39.840218   26730 notify.go:220] Checking for updates...
	I0520 10:45:39.840512   26730 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:45:39.840529   26730 status.go:255] checking status of ha-570850 ...
	I0520 10:45:39.841001   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:39.841073   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:39.858937   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
	I0520 10:45:39.859435   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:39.859912   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:39.859931   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:39.860349   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:39.860568   26730 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:45:39.862169   26730 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:45:39.862183   26730 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:45:39.862520   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:39.862559   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:39.876738   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0520 10:45:39.877155   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:39.877598   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:39.877635   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:39.877952   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:39.878141   26730 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:45:39.880835   26730 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:45:39.881236   26730 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:45:39.881269   26730 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:45:39.881363   26730 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:45:39.881669   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:39.881704   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:39.895669   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0520 10:45:39.896013   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:39.896405   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:39.896425   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:39.896703   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:39.896885   26730 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:45:39.897068   26730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:45:39.897088   26730 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:45:39.899386   26730 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:45:39.899735   26730 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:45:39.899774   26730 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:45:39.899892   26730 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:45:39.900038   26730 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:45:39.900232   26730 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:45:39.900382   26730 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:45:39.987228   26730 ssh_runner.go:195] Run: systemctl --version
	I0520 10:45:39.994277   26730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:45:40.013392   26730 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:45:40.013429   26730 api_server.go:166] Checking apiserver status ...
	I0520 10:45:40.013471   26730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:45:40.030098   26730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:45:40.039748   26730 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:45:40.039795   26730 ssh_runner.go:195] Run: ls
	I0520 10:45:40.044353   26730 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:45:40.050425   26730 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:45:40.050451   26730 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:45:40.050463   26730 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:45:40.050483   26730 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:45:40.050796   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:40.050845   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:40.065457   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0520 10:45:40.065821   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:40.066279   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:40.066300   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:40.066612   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:40.066794   26730 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:45:40.068412   26730 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:45:40.068426   26730 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:45:40.068714   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:40.068754   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:40.083323   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43071
	I0520 10:45:40.083686   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:40.084134   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:40.084167   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:40.084483   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:40.084677   26730 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:45:40.087341   26730 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:45:40.087840   26730 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:45:40.087866   26730 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:45:40.088059   26730 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:45:40.088401   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:40.088433   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:40.102653   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0520 10:45:40.102996   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:40.103395   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:40.103413   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:40.103688   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:40.103877   26730 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:45:40.104042   26730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:45:40.104063   26730 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:45:40.106630   26730 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:45:40.107055   26730 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:45:40.107092   26730 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:45:40.107265   26730 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:45:40.107410   26730 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:45:40.107572   26730 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:45:40.107728   26730 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:45:58.669098   26730 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:45:58.669177   26730 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:45:58.669193   26730 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:45:58.669200   26730 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:45:58.669216   26730 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:45:58.669226   26730 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:45:58.669606   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:58.669657   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:58.684438   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I0520 10:45:58.684846   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:58.685356   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:58.685381   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:58.685682   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:58.685889   26730 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:45:58.687310   26730 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:45:58.687328   26730 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:45:58.687605   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:58.687638   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:58.702300   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0520 10:45:58.702668   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:58.703042   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:58.703060   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:58.703381   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:58.703567   26730 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:45:58.706124   26730 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:45:58.706504   26730 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:45:58.706517   26730 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:45:58.706649   26730 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:45:58.706937   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:58.706971   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:58.722353   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0520 10:45:58.722757   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:58.723267   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:58.723288   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:58.723596   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:58.723789   26730 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:45:58.723979   26730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:45:58.723998   26730 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:45:58.726862   26730 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:45:58.727272   26730 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:45:58.727296   26730 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:45:58.727430   26730 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:45:58.727604   26730 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:45:58.727754   26730 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:45:58.727894   26730 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:45:58.818012   26730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:45:58.842394   26730 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:45:58.842423   26730 api_server.go:166] Checking apiserver status ...
	I0520 10:45:58.842452   26730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:45:58.859918   26730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:45:58.868687   26730 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:45:58.868723   26730 ssh_runner.go:195] Run: ls
	I0520 10:45:58.873179   26730 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:45:58.877295   26730 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:45:58.877315   26730 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:45:58.877325   26730 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:45:58.877351   26730 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:45:58.877668   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:58.877715   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:58.892276   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I0520 10:45:58.892757   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:58.893341   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:58.893367   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:58.893654   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:58.893847   26730 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:45:58.895316   26730 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:45:58.895334   26730 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:45:58.895767   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:58.895811   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:58.910748   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0520 10:45:58.911120   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:58.911538   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:58.911565   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:58.911891   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:58.912073   26730 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:45:58.914868   26730 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:45:58.915271   26730 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:45:58.915293   26730 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:45:58.915418   26730 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:45:58.915724   26730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:45:58.915764   26730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:45:58.930304   26730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I0520 10:45:58.930654   26730 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:45:58.931121   26730 main.go:141] libmachine: Using API Version  1
	I0520 10:45:58.931139   26730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:45:58.931433   26730 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:45:58.931612   26730 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:45:58.931787   26730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:45:58.931807   26730 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:45:58.934615   26730 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:45:58.935028   26730 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:45:58.935051   26730 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:45:58.935213   26730 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:45:58.935393   26730 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:45:58.935530   26730 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:45:58.935659   26730 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:45:59.018160   26730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:45:59.039439   26730 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-570850 -n ha-570850
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-570850 logs -n 25: (1.440112119s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m03_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m04 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp testdata/cp-test.txt                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m04_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03:/home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m03 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-570850 node stop m02 -v=7                                                     | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:38:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:38:11.730169   22116 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:38:11.730390   22116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:38:11.730398   22116 out.go:304] Setting ErrFile to fd 2...
	I0520 10:38:11.730402   22116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:38:11.730560   22116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:38:11.731084   22116 out.go:298] Setting JSON to false
	I0520 10:38:11.731809   22116 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1235,"bootTime":1716200257,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:38:11.731860   22116 start.go:139] virtualization: kvm guest
	I0520 10:38:11.733921   22116 out.go:177] * [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:38:11.735658   22116 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:38:11.737019   22116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:38:11.735634   22116 notify.go:220] Checking for updates...
	I0520 10:38:11.739319   22116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:38:11.740477   22116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:38:11.741867   22116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:38:11.743183   22116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:38:11.744547   22116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:38:11.779115   22116 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 10:38:11.780420   22116 start.go:297] selected driver: kvm2
	I0520 10:38:11.780431   22116 start.go:901] validating driver "kvm2" against <nil>
	I0520 10:38:11.780441   22116 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:38:11.781080   22116 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:38:11.781145   22116 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:38:11.795133   22116 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:38:11.795167   22116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:38:11.795348   22116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:38:11.795372   22116 cni.go:84] Creating CNI manager for ""
	I0520 10:38:11.795378   22116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 10:38:11.795384   22116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 10:38:11.795428   22116 start.go:340] cluster config:
	{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:38:11.795505   22116 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:38:11.797209   22116 out.go:177] * Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	I0520 10:38:11.798340   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:38:11.798364   22116 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:38:11.798370   22116 cache.go:56] Caching tarball of preloaded images
	I0520 10:38:11.798449   22116 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:38:11.798469   22116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:38:11.798736   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:38:11.798754   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json: {Name:mk35df54ed8a47044341ed5a7e95083f964ddf72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:11.798885   22116 start.go:360] acquireMachinesLock for ha-570850: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:38:11.798917   22116 start.go:364] duration metric: took 17.621µs to acquireMachinesLock for "ha-570850"
	I0520 10:38:11.798939   22116 start.go:93] Provisioning new machine with config: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:38:11.798992   22116 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 10:38:11.800532   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 10:38:11.800653   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:38:11.800703   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:38:11.813982   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0520 10:38:11.814361   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:38:11.814827   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:38:11.814854   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:38:11.815138   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:38:11.815335   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:11.815461   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:11.815593   22116 start.go:159] libmachine.API.Create for "ha-570850" (driver="kvm2")
	I0520 10:38:11.815630   22116 client.go:168] LocalClient.Create starting
	I0520 10:38:11.815663   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:38:11.815699   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:38:11.815718   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:38:11.815784   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:38:11.815815   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:38:11.815835   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:38:11.815866   22116 main.go:141] libmachine: Running pre-create checks...
	I0520 10:38:11.815879   22116 main.go:141] libmachine: (ha-570850) Calling .PreCreateCheck
	I0520 10:38:11.816181   22116 main.go:141] libmachine: (ha-570850) Calling .GetConfigRaw
	I0520 10:38:11.816528   22116 main.go:141] libmachine: Creating machine...
	I0520 10:38:11.816542   22116 main.go:141] libmachine: (ha-570850) Calling .Create
	I0520 10:38:11.816684   22116 main.go:141] libmachine: (ha-570850) Creating KVM machine...
	I0520 10:38:11.817947   22116 main.go:141] libmachine: (ha-570850) DBG | found existing default KVM network
	I0520 10:38:11.818627   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:11.818492   22138 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0520 10:38:11.818659   22116 main.go:141] libmachine: (ha-570850) DBG | created network xml: 
	I0520 10:38:11.818679   22116 main.go:141] libmachine: (ha-570850) DBG | <network>
	I0520 10:38:11.818690   22116 main.go:141] libmachine: (ha-570850) DBG |   <name>mk-ha-570850</name>
	I0520 10:38:11.818701   22116 main.go:141] libmachine: (ha-570850) DBG |   <dns enable='no'/>
	I0520 10:38:11.818714   22116 main.go:141] libmachine: (ha-570850) DBG |   
	I0520 10:38:11.818724   22116 main.go:141] libmachine: (ha-570850) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 10:38:11.818737   22116 main.go:141] libmachine: (ha-570850) DBG |     <dhcp>
	I0520 10:38:11.818750   22116 main.go:141] libmachine: (ha-570850) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 10:38:11.818762   22116 main.go:141] libmachine: (ha-570850) DBG |     </dhcp>
	I0520 10:38:11.818775   22116 main.go:141] libmachine: (ha-570850) DBG |   </ip>
	I0520 10:38:11.818787   22116 main.go:141] libmachine: (ha-570850) DBG |   
	I0520 10:38:11.818797   22116 main.go:141] libmachine: (ha-570850) DBG | </network>
	I0520 10:38:11.818810   22116 main.go:141] libmachine: (ha-570850) DBG | 
	I0520 10:38:11.823649   22116 main.go:141] libmachine: (ha-570850) DBG | trying to create private KVM network mk-ha-570850 192.168.39.0/24...
	I0520 10:38:11.884902   22116 main.go:141] libmachine: (ha-570850) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850 ...
	I0520 10:38:11.884949   22116 main.go:141] libmachine: (ha-570850) DBG | private KVM network mk-ha-570850 192.168.39.0/24 created
	I0520 10:38:11.884999   22116 main.go:141] libmachine: (ha-570850) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:38:11.885064   22116 main.go:141] libmachine: (ha-570850) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:38:11.885109   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:11.884856   22138 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:38:12.109817   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:12.109638   22138 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa...
	I0520 10:38:12.296359   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:12.296233   22138 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/ha-570850.rawdisk...
	I0520 10:38:12.296396   22116 main.go:141] libmachine: (ha-570850) DBG | Writing magic tar header
	I0520 10:38:12.296407   22116 main.go:141] libmachine: (ha-570850) DBG | Writing SSH key tar header
	I0520 10:38:12.296416   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:12.296339   22138 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850 ...
	I0520 10:38:12.296455   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850 (perms=drwx------)
	I0520 10:38:12.296473   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850
	I0520 10:38:12.296482   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:38:12.296512   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:38:12.296552   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:38:12.296569   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:38:12.296587   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:38:12.296601   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:38:12.296619   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:38:12.296638   22116 main.go:141] libmachine: (ha-570850) Creating domain...
	I0520 10:38:12.296654   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:38:12.296674   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:38:12.296686   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:38:12.296708   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home
	I0520 10:38:12.296720   22116 main.go:141] libmachine: (ha-570850) DBG | Skipping /home - not owner
	I0520 10:38:12.297582   22116 main.go:141] libmachine: (ha-570850) define libvirt domain using xml: 
	I0520 10:38:12.297595   22116 main.go:141] libmachine: (ha-570850) <domain type='kvm'>
	I0520 10:38:12.297600   22116 main.go:141] libmachine: (ha-570850)   <name>ha-570850</name>
	I0520 10:38:12.297605   22116 main.go:141] libmachine: (ha-570850)   <memory unit='MiB'>2200</memory>
	I0520 10:38:12.297611   22116 main.go:141] libmachine: (ha-570850)   <vcpu>2</vcpu>
	I0520 10:38:12.297617   22116 main.go:141] libmachine: (ha-570850)   <features>
	I0520 10:38:12.297625   22116 main.go:141] libmachine: (ha-570850)     <acpi/>
	I0520 10:38:12.297636   22116 main.go:141] libmachine: (ha-570850)     <apic/>
	I0520 10:38:12.297644   22116 main.go:141] libmachine: (ha-570850)     <pae/>
	I0520 10:38:12.297659   22116 main.go:141] libmachine: (ha-570850)     
	I0520 10:38:12.297688   22116 main.go:141] libmachine: (ha-570850)   </features>
	I0520 10:38:12.297706   22116 main.go:141] libmachine: (ha-570850)   <cpu mode='host-passthrough'>
	I0520 10:38:12.297712   22116 main.go:141] libmachine: (ha-570850)   
	I0520 10:38:12.297717   22116 main.go:141] libmachine: (ha-570850)   </cpu>
	I0520 10:38:12.297724   22116 main.go:141] libmachine: (ha-570850)   <os>
	I0520 10:38:12.297729   22116 main.go:141] libmachine: (ha-570850)     <type>hvm</type>
	I0520 10:38:12.297735   22116 main.go:141] libmachine: (ha-570850)     <boot dev='cdrom'/>
	I0520 10:38:12.297740   22116 main.go:141] libmachine: (ha-570850)     <boot dev='hd'/>
	I0520 10:38:12.297745   22116 main.go:141] libmachine: (ha-570850)     <bootmenu enable='no'/>
	I0520 10:38:12.297749   22116 main.go:141] libmachine: (ha-570850)   </os>
	I0520 10:38:12.297792   22116 main.go:141] libmachine: (ha-570850)   <devices>
	I0520 10:38:12.297809   22116 main.go:141] libmachine: (ha-570850)     <disk type='file' device='cdrom'>
	I0520 10:38:12.297825   22116 main.go:141] libmachine: (ha-570850)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/boot2docker.iso'/>
	I0520 10:38:12.297836   22116 main.go:141] libmachine: (ha-570850)       <target dev='hdc' bus='scsi'/>
	I0520 10:38:12.297860   22116 main.go:141] libmachine: (ha-570850)       <readonly/>
	I0520 10:38:12.297880   22116 main.go:141] libmachine: (ha-570850)     </disk>
	I0520 10:38:12.297901   22116 main.go:141] libmachine: (ha-570850)     <disk type='file' device='disk'>
	I0520 10:38:12.297920   22116 main.go:141] libmachine: (ha-570850)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:38:12.297938   22116 main.go:141] libmachine: (ha-570850)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/ha-570850.rawdisk'/>
	I0520 10:38:12.297949   22116 main.go:141] libmachine: (ha-570850)       <target dev='hda' bus='virtio'/>
	I0520 10:38:12.297959   22116 main.go:141] libmachine: (ha-570850)     </disk>
	I0520 10:38:12.297970   22116 main.go:141] libmachine: (ha-570850)     <interface type='network'>
	I0520 10:38:12.297981   22116 main.go:141] libmachine: (ha-570850)       <source network='mk-ha-570850'/>
	I0520 10:38:12.297995   22116 main.go:141] libmachine: (ha-570850)       <model type='virtio'/>
	I0520 10:38:12.298004   22116 main.go:141] libmachine: (ha-570850)     </interface>
	I0520 10:38:12.298009   22116 main.go:141] libmachine: (ha-570850)     <interface type='network'>
	I0520 10:38:12.298017   22116 main.go:141] libmachine: (ha-570850)       <source network='default'/>
	I0520 10:38:12.298023   22116 main.go:141] libmachine: (ha-570850)       <model type='virtio'/>
	I0520 10:38:12.298030   22116 main.go:141] libmachine: (ha-570850)     </interface>
	I0520 10:38:12.298036   22116 main.go:141] libmachine: (ha-570850)     <serial type='pty'>
	I0520 10:38:12.298043   22116 main.go:141] libmachine: (ha-570850)       <target port='0'/>
	I0520 10:38:12.298053   22116 main.go:141] libmachine: (ha-570850)     </serial>
	I0520 10:38:12.298061   22116 main.go:141] libmachine: (ha-570850)     <console type='pty'>
	I0520 10:38:12.298071   22116 main.go:141] libmachine: (ha-570850)       <target type='serial' port='0'/>
	I0520 10:38:12.298082   22116 main.go:141] libmachine: (ha-570850)     </console>
	I0520 10:38:12.298089   22116 main.go:141] libmachine: (ha-570850)     <rng model='virtio'>
	I0520 10:38:12.298101   22116 main.go:141] libmachine: (ha-570850)       <backend model='random'>/dev/random</backend>
	I0520 10:38:12.298107   22116 main.go:141] libmachine: (ha-570850)     </rng>
	I0520 10:38:12.298115   22116 main.go:141] libmachine: (ha-570850)     
	I0520 10:38:12.298139   22116 main.go:141] libmachine: (ha-570850)     
	I0520 10:38:12.298151   22116 main.go:141] libmachine: (ha-570850)   </devices>
	I0520 10:38:12.298160   22116 main.go:141] libmachine: (ha-570850) </domain>
	I0520 10:38:12.298169   22116 main.go:141] libmachine: (ha-570850) 
	I0520 10:38:12.301990   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:c6:ad:99 in network default
	I0520 10:38:12.302507   22116 main.go:141] libmachine: (ha-570850) Ensuring networks are active...
	I0520 10:38:12.302538   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:12.303079   22116 main.go:141] libmachine: (ha-570850) Ensuring network default is active
	I0520 10:38:12.303345   22116 main.go:141] libmachine: (ha-570850) Ensuring network mk-ha-570850 is active
	I0520 10:38:12.303779   22116 main.go:141] libmachine: (ha-570850) Getting domain xml...
	I0520 10:38:12.304404   22116 main.go:141] libmachine: (ha-570850) Creating domain...
	I0520 10:38:13.466125   22116 main.go:141] libmachine: (ha-570850) Waiting to get IP...
	I0520 10:38:13.467034   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:13.467385   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:13.467456   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:13.467388   22138 retry.go:31] will retry after 268.975125ms: waiting for machine to come up
	I0520 10:38:13.737798   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:13.738291   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:13.738322   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:13.738178   22138 retry.go:31] will retry after 298.444753ms: waiting for machine to come up
	I0520 10:38:14.038705   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:14.039099   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:14.039130   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:14.039032   22138 retry.go:31] will retry after 456.517682ms: waiting for machine to come up
	I0520 10:38:14.497606   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:14.497927   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:14.497957   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:14.497894   22138 retry.go:31] will retry after 541.99406ms: waiting for machine to come up
	I0520 10:38:15.041607   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:15.042132   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:15.042159   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:15.042073   22138 retry.go:31] will retry after 616.954895ms: waiting for machine to come up
	I0520 10:38:15.660817   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:15.661199   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:15.661230   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:15.661143   22138 retry.go:31] will retry after 627.522657ms: waiting for machine to come up
	I0520 10:38:16.289885   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:16.290316   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:16.290348   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:16.290263   22138 retry.go:31] will retry after 1.116025556s: waiting for machine to come up
	I0520 10:38:17.408172   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:17.408594   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:17.408624   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:17.408554   22138 retry.go:31] will retry after 1.070059394s: waiting for machine to come up
	I0520 10:38:18.480688   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:18.481013   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:18.481075   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:18.481003   22138 retry.go:31] will retry after 1.30907699s: waiting for machine to come up
	I0520 10:38:19.792510   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:19.792901   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:19.792935   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:19.792838   22138 retry.go:31] will retry after 2.051452481s: waiting for machine to come up
	I0520 10:38:21.846242   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:21.846727   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:21.846756   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:21.846669   22138 retry.go:31] will retry after 2.633421547s: waiting for machine to come up
	I0520 10:38:24.482945   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:24.483298   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:24.483325   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:24.483261   22138 retry.go:31] will retry after 2.992536845s: waiting for machine to come up
	I0520 10:38:27.476848   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:27.477274   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:27.477294   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:27.477229   22138 retry.go:31] will retry after 3.075289169s: waiting for machine to come up
	I0520 10:38:30.556330   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:30.556672   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:30.556695   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:30.556626   22138 retry.go:31] will retry after 3.648633436s: waiting for machine to come up
	I0520 10:38:34.207997   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.208427   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has current primary IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.208458   22116 main.go:141] libmachine: (ha-570850) Found IP for machine: 192.168.39.152
	I0520 10:38:34.208472   22116 main.go:141] libmachine: (ha-570850) Reserving static IP address...
	I0520 10:38:34.208767   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find host DHCP lease matching {name: "ha-570850", mac: "52:54:00:8e:5a:58", ip: "192.168.39.152"} in network mk-ha-570850
	I0520 10:38:34.278371   22116 main.go:141] libmachine: (ha-570850) DBG | Getting to WaitForSSH function...
	I0520 10:38:34.278403   22116 main.go:141] libmachine: (ha-570850) Reserved static IP address: 192.168.39.152
	I0520 10:38:34.278436   22116 main.go:141] libmachine: (ha-570850) Waiting for SSH to be available...
	I0520 10:38:34.280972   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.281413   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.281441   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.281561   22116 main.go:141] libmachine: (ha-570850) DBG | Using SSH client type: external
	I0520 10:38:34.281582   22116 main.go:141] libmachine: (ha-570850) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa (-rw-------)
	I0520 10:38:34.281598   22116 main.go:141] libmachine: (ha-570850) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:38:34.281604   22116 main.go:141] libmachine: (ha-570850) DBG | About to run SSH command:
	I0520 10:38:34.281613   22116 main.go:141] libmachine: (ha-570850) DBG | exit 0
	I0520 10:38:34.404737   22116 main.go:141] libmachine: (ha-570850) DBG | SSH cmd err, output: <nil>: 
	I0520 10:38:34.405061   22116 main.go:141] libmachine: (ha-570850) KVM machine creation complete!
	I0520 10:38:34.405392   22116 main.go:141] libmachine: (ha-570850) Calling .GetConfigRaw
	I0520 10:38:34.405928   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:34.406118   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:34.406280   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:38:34.406292   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:38:34.407540   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:38:34.407553   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:38:34.407558   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:38:34.407564   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.410258   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.410611   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.410625   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.410794   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.410968   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.411111   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.411239   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.411390   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.411597   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.411608   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:38:34.516209   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:38:34.516239   22116 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:38:34.516250   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.519042   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.519373   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.519398   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.519530   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.519726   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.519891   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.520024   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.520172   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.520328   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.520337   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:38:34.625625   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:38:34.625697   22116 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:38:34.625709   22116 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:38:34.625717   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:34.625957   22116 buildroot.go:166] provisioning hostname "ha-570850"
	I0520 10:38:34.625987   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:34.626156   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.628692   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.629065   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.629090   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.629260   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.629450   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.629603   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.629746   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.629957   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.630188   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.630205   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850 && echo "ha-570850" | sudo tee /etc/hostname
	I0520 10:38:34.751379   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:38:34.751404   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.754140   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.754450   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.754482   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.754653   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.754877   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.755018   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.755170   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.755331   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.755511   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.755527   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:38:34.870336   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:38:34.870362   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:38:34.870406   22116 buildroot.go:174] setting up certificates
	I0520 10:38:34.870417   22116 provision.go:84] configureAuth start
	I0520 10:38:34.870433   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:34.870673   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:34.873186   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.873425   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.873449   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.873597   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.875678   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.875991   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.876013   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.876166   22116 provision.go:143] copyHostCerts
	I0520 10:38:34.876193   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:38:34.876226   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:38:34.876238   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:38:34.876320   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:38:34.876441   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:38:34.876474   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:38:34.876483   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:38:34.876526   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:38:34.876578   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:38:34.876601   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:38:34.876610   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:38:34.876642   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:38:34.876706   22116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850 san=[127.0.0.1 192.168.39.152 ha-570850 localhost minikube]
	I0520 10:38:34.996437   22116 provision.go:177] copyRemoteCerts
	I0520 10:38:34.996488   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:38:34.996509   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.999105   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.999426   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.999452   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.999600   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.999799   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.999965   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.000112   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:38:35.085846   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:38:35.085903   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:38:35.112118   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:38:35.112204   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 10:38:35.137660   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:38:35.137711   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 10:38:35.162846   22116 provision.go:87] duration metric: took 292.413892ms to configureAuth
	I0520 10:38:35.162874   22116 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:38:35.163047   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:38:35.163177   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.165712   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.165972   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.166006   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.166163   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.166345   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.166513   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.166625   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.166801   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:35.167026   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:35.167051   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:38:35.458789   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
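The option file written above feeds extra flags to CRI-O; --insecure-registry 10.96.0.0/12 allows plain-HTTP pulls from registries inside the cluster's service CIDR. A hedged way to confirm it on the guest (hypothetical; how the crio unit sources /etc/sysconfig/crio.minikube is an assumption here):

    # Hypothetical check; assumes the crio systemd unit reads the sysconfig file.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environment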
	
	I0520 10:38:35.458816   22116 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:38:35.458833   22116 main.go:141] libmachine: (ha-570850) Calling .GetURL
	I0520 10:38:35.460116   22116 main.go:141] libmachine: (ha-570850) DBG | Using libvirt version 6000000
	I0520 10:38:35.462456   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.462751   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.462782   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.462899   22116 main.go:141] libmachine: Docker is up and running!
	I0520 10:38:35.462911   22116 main.go:141] libmachine: Reticulating splines...
	I0520 10:38:35.462917   22116 client.go:171] duration metric: took 23.647277559s to LocalClient.Create
	I0520 10:38:35.462940   22116 start.go:167] duration metric: took 23.647347204s to libmachine.API.Create "ha-570850"
	I0520 10:38:35.462986   22116 start.go:293] postStartSetup for "ha-570850" (driver="kvm2")
	I0520 10:38:35.463002   22116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:38:35.463027   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.463279   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:38:35.463300   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.465303   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.465619   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.465639   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.465789   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.465970   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.466134   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.466242   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:38:35.547765   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:38:35.552086   22116 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:38:35.552104   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:38:35.552157   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:38:35.552228   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:38:35.552237   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:38:35.552330   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:38:35.562382   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:38:35.586151   22116 start.go:296] duration metric: took 123.148208ms for postStartSetup
	I0520 10:38:35.586203   22116 main.go:141] libmachine: (ha-570850) Calling .GetConfigRaw
	I0520 10:38:35.586722   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:35.589272   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.589606   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.589635   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.589886   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:38:35.590052   22116 start.go:128] duration metric: took 23.791050927s to createHost
	I0520 10:38:35.590073   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.592357   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.592668   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.592698   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.592809   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.593009   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.593167   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.593296   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.593430   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:35.593627   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:35.593687   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:38:35.697911   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201515.672299621
	
	I0520 10:38:35.697931   22116 fix.go:216] guest clock: 1716201515.672299621
	I0520 10:38:35.697938   22116 fix.go:229] Guest: 2024-05-20 10:38:35.672299621 +0000 UTC Remote: 2024-05-20 10:38:35.590063475 +0000 UTC m=+23.895467439 (delta=82.236146ms)
	I0520 10:38:35.697961   22116 fix.go:200] guest clock delta is within tolerance: 82.236146ms
	I0520 10:38:35.697966   22116 start.go:83] releasing machines lock for "ha-570850", held for 23.899037938s
	I0520 10:38:35.697982   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.698245   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:35.700732   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.701131   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.701155   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.701284   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.701798   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.701972   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.702046   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:38:35.702077   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.702169   22116 ssh_runner.go:195] Run: cat /version.json
	I0520 10:38:35.702187   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.704775   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705033   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705183   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.705210   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705342   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.705432   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.705461   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705524   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.705610   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.705700   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.705772   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.705884   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:38:35.705910   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.706065   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:38:35.786235   22116 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:38:35.786343   22116 ssh_runner.go:195] Run: systemctl --version
	I0520 10:38:35.806121   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:38:35.960092   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:38:35.966066   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:38:35.966125   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:38:35.981958   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
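The %!p(MISSING) in the find invocation above is a logging artifact: the command actually carries -printf "%p, ". A readable, hedged reconstruction of that step:

    # Hedged reconstruction of the CNI-disabling step above; moves bridge/podman
    # CNI configs out of the way so kindnet's config is the only one left active.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;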
	I0520 10:38:35.981984   22116 start.go:494] detecting cgroup driver to use...
	I0520 10:38:35.982072   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:38:35.999580   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:38:36.014620   22116 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:38:36.014683   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:38:36.029032   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:38:36.043261   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:38:36.170229   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:38:36.325354   22116 docker.go:233] disabling docker service ...
	I0520 10:38:36.325424   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:38:36.339908   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:38:36.352965   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:38:36.473506   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:38:36.593087   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:38:36.607149   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:38:36.626143   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:38:36.626207   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.636479   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:38:36.636537   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.646607   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.656447   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.666481   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:38:36.677165   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.687069   22116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.703838   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
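Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged ports via default_sysctls. A hedged way to confirm the resulting drop-in (not run in this log; expected values are read off the sed commands above):

    # Hypothetical check of the relevant keys after the edits above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (roughly):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",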
	I0520 10:38:36.713638   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:38:36.722797   22116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:38:36.722839   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:38:36.736966   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
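The sysctl probe above fails simply because br_netfilter is not loaded yet; once the modprobe succeeds the key exists and bridged traffic is visible to iptables. A hypothetical follow-up check (none of these commands appear in this run):

    # Hypothetical verification after the modprobe and ip_forward steps above.
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward    # expect 1 after the echo above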
	I0520 10:38:36.746694   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:38:36.860575   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:38:37.002954   22116 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:38:37.003017   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:38:37.007631   22116 start.go:562] Will wait 60s for crictl version
	I0520 10:38:37.007681   22116 ssh_runner.go:195] Run: which crictl
	I0520 10:38:37.011352   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:38:37.050359   22116 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:38:37.050449   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:38:37.078126   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:38:37.107078   22116 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:38:37.108468   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:37.111104   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:37.111433   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:37.111456   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:37.111693   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:38:37.115693   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:38:37.129069   22116 kubeadm.go:877] updating cluster {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:38:37.129178   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:38:37.129219   22116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:38:37.164768   22116 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 10:38:37.164843   22116 ssh_runner.go:195] Run: which lz4
	I0520 10:38:37.168726   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 10:38:37.168820   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 10:38:37.172955   22116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 10:38:37.172984   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 10:38:38.576176   22116 crio.go:462] duration metric: took 1.407378501s to copy over tarball
	I0520 10:38:38.576242   22116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 10:38:40.680099   22116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10381902s)
	I0520 10:38:40.680142   22116 crio.go:469] duration metric: took 2.103938293s to extract the tarball
	I0520 10:38:40.680152   22116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 10:38:40.717304   22116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:38:40.759915   22116 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:38:40.759937   22116 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:38:40.759944   22116 kubeadm.go:928] updating node { 192.168.39.152 8443 v1.30.1 crio true true} ...
	I0520 10:38:40.760047   22116 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
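The unit fragment above is what later lands in the kubelet drop-in (the 309-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down); the empty ExecStart= followed by a full ExecStart= line is the usual systemd pattern for replacing, rather than appending to, the packaged command line. A hedged way to inspect the result on the node (hypothetical, not part of this run):

    # Hypothetical inspection commands.
    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart   # the overridden command line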
	I0520 10:38:40.760109   22116 ssh_runner.go:195] Run: crio config
	I0520 10:38:40.805487   22116 cni.go:84] Creating CNI manager for ""
	I0520 10:38:40.805506   22116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 10:38:40.805520   22116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:38:40.805545   22116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-570850 NodeName:ha-570850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:38:40.805697   22116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-570850"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
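The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are concatenated into the file pushed below as /var/tmp/minikube/kubeadm.yaml.new and copied into place before init. A hedged sanity check of the generated config (hypothetical; not executed in this run):

    # Hypothetical dry run; validates the config and renders manifests without touching the node.
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run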
	
	I0520 10:38:40.805721   22116 kube-vip.go:115] generating kube-vip config ...
	I0520 10:38:40.805774   22116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:38:40.823953   22116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:38:40.824070   22116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
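This static pod pins the control-plane VIP 192.168.39.254 to eth0, runs leader election through the plndr-cp-lock lease, and load-balances API traffic on port 8443 across control-plane members. Hypothetical checks once the pod is up (not captured here; assumes kube-vip has won leader election on this node):

    # Hypothetical verification of the VIP and the API endpoint behind it.
    ip addr show dev eth0 | grep 192.168.39.254
    curl -ks https://192.168.39.254:8443/version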
	I0520 10:38:40.824132   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:38:40.834205   22116 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:38:40.834282   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 10:38:40.843485   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 10:38:40.859593   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:38:40.875643   22116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 10:38:40.891603   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 10:38:40.907772   22116 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:38:40.911554   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:38:40.923517   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:38:41.048526   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:38:41.065763   22116 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.152
	I0520 10:38:41.065783   22116 certs.go:194] generating shared ca certs ...
	I0520 10:38:41.065797   22116 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.065989   22116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:38:41.066050   22116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:38:41.066064   22116 certs.go:256] generating profile certs ...
	I0520 10:38:41.066126   22116 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:38:41.066146   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt with IP's: []
	I0520 10:38:41.223062   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt ...
	I0520 10:38:41.223091   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt: {Name:mk1c16122880ad6a3b17b6d429622e6c1a03b4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.223277   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key ...
	I0520 10:38:41.223292   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key: {Name:mk43f8b6ad1c636cb66aeb447dbd30dc186c0027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.223393   22116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a
	I0520 10:38:41.223416   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.254]
	I0520 10:38:41.344455   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a ...
	I0520 10:38:41.344485   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a: {Name:mk7f061b81223a9de0b98a169f7fe69f29e0d9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.344656   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a ...
	I0520 10:38:41.344673   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a: {Name:mkf45a4fcfce8e7a84544d6dc139989a07e13978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.344767   22116 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:38:41.344858   22116 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:38:41.344951   22116 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:38:41.344986   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt with IP's: []
	I0520 10:38:41.400685   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt ...
	I0520 10:38:41.400717   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt: {Name:mk7c4fb26d0612e4af4a2f45c3a0baf1b9961239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.400880   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key ...
	I0520 10:38:41.400895   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key: {Name:mka17d7cd9dd870412246eda6fe95b4c8febc0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.401008   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:38:41.401030   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:38:41.401046   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:38:41.401080   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:38:41.401099   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:38:41.401118   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:38:41.401145   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:38:41.401162   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:38:41.401224   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:38:41.401269   22116 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:38:41.401282   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:38:41.401316   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:38:41.401348   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:38:41.401385   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:38:41.401445   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:38:41.401483   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.401504   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.401522   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.402073   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:38:41.427596   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:38:41.450983   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:38:41.474449   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:38:41.498000   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 10:38:41.520364   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 10:38:41.543405   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:38:41.566383   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:38:41.593451   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:38:41.619857   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:38:41.646812   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:38:41.671984   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:38:41.687982   22116 ssh_runner.go:195] Run: openssl version
	I0520 10:38:41.693764   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:38:41.703811   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.708063   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.708097   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.713619   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:38:41.723742   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:38:41.733921   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.737971   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.738005   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.743488   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:38:41.753553   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:38:41.763559   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.767848   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.767888   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.773219   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
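Each certificate above follows the same OpenSSL CApath convention: hash the subject, then link the PEM as <hash>.0 under /etc/ssl/certs so verification by directory lookup works. The generic form of that step, sketched from the commands above (concrete hashes such as b5213941 appear in the log):

    # Generic hash-and-link step, sketched from the openssl/ln commands above.
    pem=/etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"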
	I0520 10:38:41.783313   22116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:38:41.787579   22116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:38:41.787622   22116 kubeadm.go:391] StartCluster: {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:38:41.787689   22116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:38:41.787726   22116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:38:41.824726   22116 cri.go:89] found id: ""
	I0520 10:38:41.824793   22116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:38:41.838012   22116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:38:41.855437   22116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:38:41.870445   22116 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:38:41.870466   22116 kubeadm.go:156] found existing configuration files:
	
	I0520 10:38:41.870512   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:38:41.882116   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:38:41.882175   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:38:41.894722   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:38:41.907082   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:38:41.907127   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:38:41.915796   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:38:41.924057   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:38:41.924108   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:38:41.932650   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:38:41.940875   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:38:41.940922   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 10:38:41.949797   22116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 10:38:42.178096   22116 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 10:38:53.283971   22116 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:38:53.284019   22116 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:38:53.284100   22116 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:38:53.284212   22116 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:38:53.284326   22116 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 10:38:53.284414   22116 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:38:53.286145   22116 out.go:204]   - Generating certificates and keys ...
	I0520 10:38:53.286237   22116 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:38:53.286302   22116 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:38:53.286394   22116 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:38:53.286481   22116 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:38:53.286565   22116 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:38:53.286633   22116 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:38:53.286755   22116 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:38:53.286931   22116 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-570850 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0520 10:38:53.287009   22116 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:38:53.287170   22116 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-570850 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0520 10:38:53.287264   22116 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:38:53.287353   22116 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:38:53.287415   22116 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:38:53.287490   22116 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:38:53.287563   22116 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:38:53.287643   22116 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:38:53.287711   22116 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:38:53.287819   22116 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:38:53.287869   22116 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:38:53.287937   22116 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:38:53.287998   22116 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:38:53.289698   22116 out.go:204]   - Booting up control plane ...
	I0520 10:38:53.289786   22116 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:38:53.289861   22116 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:38:53.289916   22116 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:38:53.290002   22116 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:38:53.290117   22116 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:38:53.290188   22116 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:38:53.290351   22116 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:38:53.290449   22116 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:38:53.290525   22116 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002313385s
	I0520 10:38:53.290610   22116 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:38:53.290687   22116 kubeadm.go:309] [api-check] The API server is healthy after 5.6149036s
	I0520 10:38:53.290826   22116 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:38:53.291001   22116 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:38:53.291060   22116 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:38:53.291220   22116 kubeadm.go:309] [mark-control-plane] Marking the node ha-570850 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:38:53.291269   22116 kubeadm.go:309] [bootstrap-token] Using token: d9nkx9.0xat5xp4yeo1kymg
	I0520 10:38:53.292564   22116 out.go:204]   - Configuring RBAC rules ...
	I0520 10:38:53.292663   22116 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:38:53.292796   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:38:53.292908   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:38:53.293073   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:38:53.293205   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:38:53.293281   22116 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:38:53.293375   22116 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:38:53.293432   22116 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:38:53.293476   22116 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:38:53.293485   22116 kubeadm.go:309] 
	I0520 10:38:53.293551   22116 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:38:53.293560   22116 kubeadm.go:309] 
	I0520 10:38:53.293647   22116 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:38:53.293661   22116 kubeadm.go:309] 
	I0520 10:38:53.293712   22116 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:38:53.293789   22116 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:38:53.293866   22116 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:38:53.293880   22116 kubeadm.go:309] 
	I0520 10:38:53.293975   22116 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:38:53.293993   22116 kubeadm.go:309] 
	I0520 10:38:53.294030   22116 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:38:53.294041   22116 kubeadm.go:309] 
	I0520 10:38:53.294095   22116 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:38:53.294170   22116 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:38:53.294238   22116 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:38:53.294244   22116 kubeadm.go:309] 
	I0520 10:38:53.294316   22116 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:38:53.294403   22116 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:38:53.294413   22116 kubeadm.go:309] 
	I0520 10:38:53.294524   22116 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d9nkx9.0xat5xp4yeo1kymg \
	I0520 10:38:53.294646   22116 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 10:38:53.294678   22116 kubeadm.go:309] 	--control-plane 
	I0520 10:38:53.294687   22116 kubeadm.go:309] 
	I0520 10:38:53.294784   22116 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:38:53.294793   22116 kubeadm.go:309] 
	I0520 10:38:53.294861   22116 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d9nkx9.0xat5xp4yeo1kymg \
	I0520 10:38:53.294961   22116 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
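The join commands printed above carry a --discovery-token-ca-cert-hash value. For reference, that value is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. The minimal Go sketch below (standard library only) recomputes such a hash for verification; the ca.crt path is illustrative, based on the certificateDir "/var/lib/minikube/certs" logged earlier, and the sketch is not part of minikube's own code.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; inside the minikube VM the cluster CA is expected under /var/lib/minikube/certs.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is the SHA-256 of the CA certificate's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

Running this against the cluster CA should print the same sha256:1d7b... value that appears in the logged join commands.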
	I0520 10:38:53.294974   22116 cni.go:84] Creating CNI manager for ""
	I0520 10:38:53.294984   22116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 10:38:53.296515   22116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 10:38:53.297698   22116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 10:38:53.303134   22116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 10:38:53.303150   22116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 10:38:53.324827   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 10:38:53.702667   22116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:38:53.702813   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:53.702821   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-570850 minikube.k8s.io/updated_at=2024_05_20T10_38_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-570850 minikube.k8s.io/primary=true
	I0520 10:38:53.732556   22116 ops.go:34] apiserver oom_adj: -16
	I0520 10:38:53.838256   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:54.338854   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:54.839166   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:55.338668   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:55.838915   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:56.338596   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:56.839322   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:57.338462   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:57.838671   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:58.338318   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:58.839182   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:59.339292   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:59.838564   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:00.338310   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:00.838519   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:01.338903   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:01.839101   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:02.338923   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:02.839124   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:03.339212   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:03.838389   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:04.339186   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:04.838759   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:05.339169   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:05.838272   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:05.955390   22116 kubeadm.go:1107] duration metric: took 12.252643948s to wait for elevateKubeSystemPrivileges
	W0520 10:39:05.955437   22116 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:39:05.955449   22116 kubeadm.go:393] duration metric: took 24.167826538s to StartCluster
	I0520 10:39:05.955469   22116 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:05.955559   22116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:39:05.956482   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:05.956720   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:39:05.956731   22116 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:39:05.956755   22116 start.go:240] waiting for startup goroutines ...
	I0520 10:39:05.956770   22116 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 10:39:05.956858   22116 addons.go:69] Setting storage-provisioner=true in profile "ha-570850"
	I0520 10:39:05.956873   22116 addons.go:69] Setting default-storageclass=true in profile "ha-570850"
	I0520 10:39:05.956887   22116 addons.go:234] Setting addon storage-provisioner=true in "ha-570850"
	I0520 10:39:05.956912   22116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-570850"
	I0520 10:39:05.956922   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:39:05.956981   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:05.957340   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.957364   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.957380   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.957392   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.972241   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I0520 10:39:05.972433   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0520 10:39:05.972669   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.972906   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.973218   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.973236   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.973413   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.973433   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.973597   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.973738   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.973778   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:05.974233   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.974277   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.976055   22116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:39:05.976397   22116 kapi.go:59] client config for ha-570850: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 10:39:05.976962   22116 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 10:39:05.977226   22116 addons.go:234] Setting addon default-storageclass=true in "ha-570850"
	I0520 10:39:05.977269   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:39:05.977643   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.977688   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.991328   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0520 10:39:05.991976   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.992066   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0520 10:39:05.992512   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.992530   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.992568   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.992916   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.992986   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.993003   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.993100   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:05.993409   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.993973   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.994016   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.994785   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:39:05.996558   22116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:39:05.997940   22116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:39:05.997957   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:39:05.997974   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:39:06.000388   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.000750   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:39:06.000817   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.001075   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:39:06.001264   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:39:06.001397   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:39:06.001523   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:39:06.009670   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I0520 10:39:06.010031   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:06.010441   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:06.010462   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:06.010748   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:06.010932   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:06.012271   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:39:06.012451   22116 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:39:06.012463   22116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:39:06.012475   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:39:06.015176   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.015460   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:39:06.015520   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.015708   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:39:06.015952   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:39:06.016110   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:39:06.016242   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:39:06.085059   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 10:39:06.143593   22116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:39:06.204380   22116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:39:06.614259   22116 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 10:39:06.948366   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948394   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.948410   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948426   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.948785   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.948790   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.948788   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.948808   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.948824   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.948832   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948842   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.948808   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.948938   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948956   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.949078   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.949286   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.949109   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.949133   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.949433   22116 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 10:39:06.949446   22116 round_trippers.go:469] Request Headers:
	I0520 10:39:06.949457   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:39:06.949464   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:39:06.949437   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.949204   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.969710   22116 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 10:39:06.970478   22116 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 10:39:06.970501   22116 round_trippers.go:469] Request Headers:
	I0520 10:39:06.970512   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:39:06.970518   22116 round_trippers.go:473]     Content-Type: application/json
	I0520 10:39:06.970524   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:39:06.973399   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
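The GET and PUT requests above are the default-storageclass addon reading the cluster's StorageClasses and then updating "standard" so it carries the default-class annotation. A rough client-go sketch of the same idea (the kubeconfig path and StorageClass name are taken from the log for illustration; this is not minikube's actual implementation):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; any admin kubeconfig for the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Well-known annotation that marks a StorageClass as the cluster default.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}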
	I0520 10:39:06.973750   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.973765   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.974108   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.974127   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.974164   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.976462   22116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 10:39:06.978014   22116 addons.go:505] duration metric: took 1.021245908s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 10:39:06.978070   22116 start.go:245] waiting for cluster config update ...
	I0520 10:39:06.978087   22116 start.go:254] writing updated cluster config ...
	I0520 10:39:06.979981   22116 out.go:177] 
	I0520 10:39:06.982195   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:06.982294   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:39:06.984294   22116 out.go:177] * Starting "ha-570850-m02" control-plane node in "ha-570850" cluster
	I0520 10:39:06.985971   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:39:06.985999   22116 cache.go:56] Caching tarball of preloaded images
	I0520 10:39:06.986140   22116 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:39:06.986157   22116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:39:06.986253   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:39:06.986480   22116 start.go:360] acquireMachinesLock for ha-570850-m02: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:39:06.986544   22116 start.go:364] duration metric: took 37.407µs to acquireMachinesLock for "ha-570850-m02"
	I0520 10:39:06.986567   22116 start.go:93] Provisioning new machine with config: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:39:06.986673   22116 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0520 10:39:06.988380   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 10:39:06.988492   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:06.988534   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:07.003535   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0520 10:39:07.004005   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:07.004467   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:07.004487   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:07.004753   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:07.004960   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:07.005122   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:07.005278   22116 start.go:159] libmachine.API.Create for "ha-570850" (driver="kvm2")
	I0520 10:39:07.005307   22116 client.go:168] LocalClient.Create starting
	I0520 10:39:07.005342   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:39:07.005379   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:39:07.005396   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:39:07.005457   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:39:07.005480   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:39:07.005496   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:39:07.005535   22116 main.go:141] libmachine: Running pre-create checks...
	I0520 10:39:07.005548   22116 main.go:141] libmachine: (ha-570850-m02) Calling .PreCreateCheck
	I0520 10:39:07.005706   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetConfigRaw
	I0520 10:39:07.006043   22116 main.go:141] libmachine: Creating machine...
	I0520 10:39:07.006058   22116 main.go:141] libmachine: (ha-570850-m02) Calling .Create
	I0520 10:39:07.006170   22116 main.go:141] libmachine: (ha-570850-m02) Creating KVM machine...
	I0520 10:39:07.007358   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found existing default KVM network
	I0520 10:39:07.007519   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found existing private KVM network mk-ha-570850
	I0520 10:39:07.007605   22116 main.go:141] libmachine: (ha-570850-m02) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02 ...
	I0520 10:39:07.007639   22116 main.go:141] libmachine: (ha-570850-m02) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:39:07.007695   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.007608   22518 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:39:07.007788   22116 main.go:141] libmachine: (ha-570850-m02) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:39:07.222352   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.222226   22518 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa...
	I0520 10:39:07.538827   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.538683   22518 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/ha-570850-m02.rawdisk...
	I0520 10:39:07.538865   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Writing magic tar header
	I0520 10:39:07.538881   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Writing SSH key tar header
	I0520 10:39:07.538893   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.538838   22518 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02 ...
	I0520 10:39:07.539023   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02
	I0520 10:39:07.539049   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:39:07.539063   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02 (perms=drwx------)
	I0520 10:39:07.539079   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:39:07.539088   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:39:07.539098   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:39:07.539106   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:39:07.539122   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:39:07.539141   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:39:07.539152   22116 main.go:141] libmachine: (ha-570850-m02) Creating domain...
	I0520 10:39:07.539167   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:39:07.539175   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:39:07.539198   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:39:07.539206   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home
	I0520 10:39:07.539212   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Skipping /home - not owner
	I0520 10:39:07.540358   22116 main.go:141] libmachine: (ha-570850-m02) define libvirt domain using xml: 
	I0520 10:39:07.540381   22116 main.go:141] libmachine: (ha-570850-m02) <domain type='kvm'>
	I0520 10:39:07.540390   22116 main.go:141] libmachine: (ha-570850-m02)   <name>ha-570850-m02</name>
	I0520 10:39:07.540399   22116 main.go:141] libmachine: (ha-570850-m02)   <memory unit='MiB'>2200</memory>
	I0520 10:39:07.540405   22116 main.go:141] libmachine: (ha-570850-m02)   <vcpu>2</vcpu>
	I0520 10:39:07.540415   22116 main.go:141] libmachine: (ha-570850-m02)   <features>
	I0520 10:39:07.540420   22116 main.go:141] libmachine: (ha-570850-m02)     <acpi/>
	I0520 10:39:07.540430   22116 main.go:141] libmachine: (ha-570850-m02)     <apic/>
	I0520 10:39:07.540434   22116 main.go:141] libmachine: (ha-570850-m02)     <pae/>
	I0520 10:39:07.540442   22116 main.go:141] libmachine: (ha-570850-m02)     
	I0520 10:39:07.540472   22116 main.go:141] libmachine: (ha-570850-m02)   </features>
	I0520 10:39:07.540495   22116 main.go:141] libmachine: (ha-570850-m02)   <cpu mode='host-passthrough'>
	I0520 10:39:07.540509   22116 main.go:141] libmachine: (ha-570850-m02)   
	I0520 10:39:07.540519   22116 main.go:141] libmachine: (ha-570850-m02)   </cpu>
	I0520 10:39:07.540533   22116 main.go:141] libmachine: (ha-570850-m02)   <os>
	I0520 10:39:07.540540   22116 main.go:141] libmachine: (ha-570850-m02)     <type>hvm</type>
	I0520 10:39:07.540550   22116 main.go:141] libmachine: (ha-570850-m02)     <boot dev='cdrom'/>
	I0520 10:39:07.540557   22116 main.go:141] libmachine: (ha-570850-m02)     <boot dev='hd'/>
	I0520 10:39:07.540578   22116 main.go:141] libmachine: (ha-570850-m02)     <bootmenu enable='no'/>
	I0520 10:39:07.540597   22116 main.go:141] libmachine: (ha-570850-m02)   </os>
	I0520 10:39:07.540607   22116 main.go:141] libmachine: (ha-570850-m02)   <devices>
	I0520 10:39:07.540621   22116 main.go:141] libmachine: (ha-570850-m02)     <disk type='file' device='cdrom'>
	I0520 10:39:07.540634   22116 main.go:141] libmachine: (ha-570850-m02)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/boot2docker.iso'/>
	I0520 10:39:07.540641   22116 main.go:141] libmachine: (ha-570850-m02)       <target dev='hdc' bus='scsi'/>
	I0520 10:39:07.540646   22116 main.go:141] libmachine: (ha-570850-m02)       <readonly/>
	I0520 10:39:07.540653   22116 main.go:141] libmachine: (ha-570850-m02)     </disk>
	I0520 10:39:07.540659   22116 main.go:141] libmachine: (ha-570850-m02)     <disk type='file' device='disk'>
	I0520 10:39:07.540667   22116 main.go:141] libmachine: (ha-570850-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:39:07.540676   22116 main.go:141] libmachine: (ha-570850-m02)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/ha-570850-m02.rawdisk'/>
	I0520 10:39:07.540683   22116 main.go:141] libmachine: (ha-570850-m02)       <target dev='hda' bus='virtio'/>
	I0520 10:39:07.540689   22116 main.go:141] libmachine: (ha-570850-m02)     </disk>
	I0520 10:39:07.540694   22116 main.go:141] libmachine: (ha-570850-m02)     <interface type='network'>
	I0520 10:39:07.540720   22116 main.go:141] libmachine: (ha-570850-m02)       <source network='mk-ha-570850'/>
	I0520 10:39:07.540766   22116 main.go:141] libmachine: (ha-570850-m02)       <model type='virtio'/>
	I0520 10:39:07.540780   22116 main.go:141] libmachine: (ha-570850-m02)     </interface>
	I0520 10:39:07.540787   22116 main.go:141] libmachine: (ha-570850-m02)     <interface type='network'>
	I0520 10:39:07.540800   22116 main.go:141] libmachine: (ha-570850-m02)       <source network='default'/>
	I0520 10:39:07.540808   22116 main.go:141] libmachine: (ha-570850-m02)       <model type='virtio'/>
	I0520 10:39:07.540819   22116 main.go:141] libmachine: (ha-570850-m02)     </interface>
	I0520 10:39:07.540829   22116 main.go:141] libmachine: (ha-570850-m02)     <serial type='pty'>
	I0520 10:39:07.540839   22116 main.go:141] libmachine: (ha-570850-m02)       <target port='0'/>
	I0520 10:39:07.540851   22116 main.go:141] libmachine: (ha-570850-m02)     </serial>
	I0520 10:39:07.540861   22116 main.go:141] libmachine: (ha-570850-m02)     <console type='pty'>
	I0520 10:39:07.540877   22116 main.go:141] libmachine: (ha-570850-m02)       <target type='serial' port='0'/>
	I0520 10:39:07.540888   22116 main.go:141] libmachine: (ha-570850-m02)     </console>
	I0520 10:39:07.540897   22116 main.go:141] libmachine: (ha-570850-m02)     <rng model='virtio'>
	I0520 10:39:07.540908   22116 main.go:141] libmachine: (ha-570850-m02)       <backend model='random'>/dev/random</backend>
	I0520 10:39:07.540918   22116 main.go:141] libmachine: (ha-570850-m02)     </rng>
	I0520 10:39:07.540945   22116 main.go:141] libmachine: (ha-570850-m02)     
	I0520 10:39:07.540954   22116 main.go:141] libmachine: (ha-570850-m02)     
	I0520 10:39:07.540994   22116 main.go:141] libmachine: (ha-570850-m02)   </devices>
	I0520 10:39:07.541017   22116 main.go:141] libmachine: (ha-570850-m02) </domain>
	I0520 10:39:07.541045   22116 main.go:141] libmachine: (ha-570850-m02) 
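The lines above dump the libvirt domain XML the kvm2 driver defines for ha-570850-m02: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the default and mk-ha-570850 networks. A minimal sketch of defining and booting such a domain with the Go libvirt bindings, assuming the libvirt.org/go/libvirt package and an XML file assembled like the block above (domain.xml is a hypothetical name, not something minikube writes):

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same connection URI as the KVMQemuURI logged in the machine config.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// domain.xml is an illustrative file holding a <domain type='kvm'> definition like the one logged above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the guest
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
}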
	I0520 10:39:07.548277   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:85:10:a4 in network default
	I0520 10:39:07.548893   22116 main.go:141] libmachine: (ha-570850-m02) Ensuring networks are active...
	I0520 10:39:07.548911   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:07.549679   22116 main.go:141] libmachine: (ha-570850-m02) Ensuring network default is active
	I0520 10:39:07.550054   22116 main.go:141] libmachine: (ha-570850-m02) Ensuring network mk-ha-570850 is active
	I0520 10:39:07.550404   22116 main.go:141] libmachine: (ha-570850-m02) Getting domain xml...
	I0520 10:39:07.551042   22116 main.go:141] libmachine: (ha-570850-m02) Creating domain...
	I0520 10:39:08.760553   22116 main.go:141] libmachine: (ha-570850-m02) Waiting to get IP...
	I0520 10:39:08.761403   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:08.761875   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:08.761928   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:08.761845   22518 retry.go:31] will retry after 308.455948ms: waiting for machine to come up
	I0520 10:39:09.072591   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:09.073264   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:09.073293   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:09.073213   22518 retry.go:31] will retry after 383.205669ms: waiting for machine to come up
	I0520 10:39:09.458510   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:09.459020   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:09.459068   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:09.458988   22518 retry.go:31] will retry after 396.48102ms: waiting for machine to come up
	I0520 10:39:09.856507   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:09.856981   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:09.857008   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:09.856941   22518 retry.go:31] will retry after 553.948365ms: waiting for machine to come up
	I0520 10:39:10.412589   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:10.413029   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:10.413073   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:10.413002   22518 retry.go:31] will retry after 526.539632ms: waiting for machine to come up
	I0520 10:39:10.940743   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:10.941226   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:10.941254   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:10.941165   22518 retry.go:31] will retry after 794.865404ms: waiting for machine to come up
	I0520 10:39:11.738030   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:11.738544   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:11.738570   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:11.738477   22518 retry.go:31] will retry after 1.0140341s: waiting for machine to come up
	I0520 10:39:12.753637   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:12.754035   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:12.754095   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:12.754026   22518 retry.go:31] will retry after 1.031388488s: waiting for machine to come up
	I0520 10:39:13.787382   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:13.787765   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:13.787803   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:13.787732   22518 retry.go:31] will retry after 1.73196699s: waiting for machine to come up
	I0520 10:39:15.521538   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:15.521914   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:15.521936   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:15.521886   22518 retry.go:31] will retry after 2.161679329s: waiting for machine to come up
	I0520 10:39:17.685849   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:17.686404   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:17.686432   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:17.686344   22518 retry.go:31] will retry after 2.721641433s: waiting for machine to come up
	I0520 10:39:20.411165   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:20.411496   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:20.411522   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:20.411468   22518 retry.go:31] will retry after 2.809218862s: waiting for machine to come up
	I0520 10:39:23.222165   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:23.222465   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:23.222500   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:23.222424   22518 retry.go:31] will retry after 4.068354924s: waiting for machine to come up
	I0520 10:39:27.292423   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:27.292962   22116 main.go:141] libmachine: (ha-570850-m02) Found IP for machine: 192.168.39.111
	I0520 10:39:27.292991   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:27.293001   22116 main.go:141] libmachine: (ha-570850-m02) Reserving static IP address...
	I0520 10:39:27.293329   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find host DHCP lease matching {name: "ha-570850-m02", mac: "52:54:00:02:04:25", ip: "192.168.39.111"} in network mk-ha-570850
	I0520 10:39:27.362808   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Getting to WaitForSSH function...
	I0520 10:39:27.362838   22116 main.go:141] libmachine: (ha-570850-m02) Reserved static IP address: 192.168.39.111
	I0520 10:39:27.362851   22116 main.go:141] libmachine: (ha-570850-m02) Waiting for SSH to be available...
	I0520 10:39:27.365524   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:27.365868   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850
	I0520 10:39:27.365912   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find defined IP address of network mk-ha-570850 interface with MAC address 52:54:00:02:04:25
	I0520 10:39:27.366107   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH client type: external
	I0520 10:39:27.366126   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa (-rw-------)
	I0520 10:39:27.366174   22116 main.go:141] libmachine: (ha-570850-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:39:27.366192   22116 main.go:141] libmachine: (ha-570850-m02) DBG | About to run SSH command:
	I0520 10:39:27.366204   22116 main.go:141] libmachine: (ha-570850-m02) DBG | exit 0
	I0520 10:39:27.369640   22116 main.go:141] libmachine: (ha-570850-m02) DBG | SSH cmd err, output: exit status 255: 
	I0520 10:39:27.369661   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 10:39:27.369670   22116 main.go:141] libmachine: (ha-570850-m02) DBG | command : exit 0
	I0520 10:39:27.369678   22116 main.go:141] libmachine: (ha-570850-m02) DBG | err     : exit status 255
	I0520 10:39:27.369689   22116 main.go:141] libmachine: (ha-570850-m02) DBG | output  : 
	I0520 10:39:30.371855   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Getting to WaitForSSH function...
	I0520 10:39:30.374273   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.374696   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.374725   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.374812   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH client type: external
	I0520 10:39:30.374835   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa (-rw-------)
	I0520 10:39:30.374866   22116 main.go:141] libmachine: (ha-570850-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:39:30.374888   22116 main.go:141] libmachine: (ha-570850-m02) DBG | About to run SSH command:
	I0520 10:39:30.374896   22116 main.go:141] libmachine: (ha-570850-m02) DBG | exit 0
	I0520 10:39:30.501001   22116 main.go:141] libmachine: (ha-570850-m02) DBG | SSH cmd err, output: <nil>: 
	I0520 10:39:30.501285   22116 main.go:141] libmachine: (ha-570850-m02) KVM machine creation complete!
	I0520 10:39:30.501636   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetConfigRaw
	I0520 10:39:30.502176   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:30.502377   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:30.502502   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:39:30.502518   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:39:30.503643   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:39:30.503660   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:39:30.503667   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:39:30.503675   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.505760   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.506063   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.506087   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.506220   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.506410   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.506565   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.506746   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.506928   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.507140   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.507151   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:39:30.612174   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:39:30.612194   22116 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:39:30.612201   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.614747   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.615126   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.615152   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.615284   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.615490   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.615661   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.615802   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.615986   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.616154   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.616164   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:39:30.725354   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:39:30.725456   22116 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:39:30.725468   22116 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:39:30.725479   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:30.725703   22116 buildroot.go:166] provisioning hostname "ha-570850-m02"
	I0520 10:39:30.725724   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:30.725936   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.728383   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.728766   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.728790   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.728967   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.729133   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.729289   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.729424   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.729594   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.729763   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.729784   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850-m02 && echo "ha-570850-m02" | sudo tee /etc/hostname
	I0520 10:39:30.851186   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850-m02
	
	I0520 10:39:30.851214   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.853747   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.854016   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.854040   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.854209   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.854381   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.854497   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.854615   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.854749   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.854960   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.854984   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:39:30.969623   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:39:30.969651   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:39:30.969686   22116 buildroot.go:174] setting up certificates
	I0520 10:39:30.969698   22116 provision.go:84] configureAuth start
	I0520 10:39:30.969712   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:30.970030   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:30.972733   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.973127   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.973151   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.973285   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.975420   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.975834   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.975855   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.975975   22116 provision.go:143] copyHostCerts
	I0520 10:39:30.976007   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:39:30.976039   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:39:30.976061   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:39:30.976128   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:39:30.976205   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:39:30.976221   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:39:30.976228   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:39:30.976251   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:39:30.976301   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:39:30.976317   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:39:30.976323   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:39:30.976342   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:39:30.976398   22116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850-m02 san=[127.0.0.1 192.168.39.111 ha-570850-m02 localhost minikube]
	I0520 10:39:31.130245   22116 provision.go:177] copyRemoteCerts
	I0520 10:39:31.130300   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:39:31.130323   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.133078   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.133415   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.133441   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.133607   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.133919   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.134080   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.134226   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:31.223915   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:39:31.224018   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:39:31.247409   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:39:31.247466   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:39:31.272356   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:39:31.272416   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:39:31.297579   22116 provision.go:87] duration metric: took 327.868675ms to configureAuth
	I0520 10:39:31.297608   22116 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:39:31.297817   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:31.297914   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.300270   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.300976   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.301021   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.301281   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.301663   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.301943   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.302207   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.302343   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:31.302550   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:31.302572   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:39:31.596508   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:39:31.596529   22116 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:39:31.596536   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetURL
	I0520 10:39:31.597741   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using libvirt version 6000000
	I0520 10:39:31.599832   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.600219   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.600249   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.600414   22116 main.go:141] libmachine: Docker is up and running!
	I0520 10:39:31.600433   22116 main.go:141] libmachine: Reticulating splines...
	I0520 10:39:31.600439   22116 client.go:171] duration metric: took 24.595123041s to LocalClient.Create
	I0520 10:39:31.600459   22116 start.go:167] duration metric: took 24.5951841s to libmachine.API.Create "ha-570850"
	I0520 10:39:31.600468   22116 start.go:293] postStartSetup for "ha-570850-m02" (driver="kvm2")
	I0520 10:39:31.600478   22116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:39:31.600503   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.600715   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:39:31.600735   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.602815   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.603128   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.603153   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.603306   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.603461   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.603577   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.603730   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:31.689013   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:39:31.693322   22116 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:39:31.693338   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:39:31.693397   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:39:31.693462   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:39:31.693471   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:39:31.693547   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:39:31.702965   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:39:31.726399   22116 start.go:296] duration metric: took 125.918505ms for postStartSetup
	I0520 10:39:31.726442   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetConfigRaw
	I0520 10:39:31.726954   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:31.729475   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.729762   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.729790   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.730014   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:39:31.730185   22116 start.go:128] duration metric: took 24.743500689s to createHost
	I0520 10:39:31.730205   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.732251   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.732574   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.732602   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.732737   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.732949   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.733109   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.733235   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.733376   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:31.733520   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:31.733530   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:39:31.841774   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201571.818898623
	
	I0520 10:39:31.841796   22116 fix.go:216] guest clock: 1716201571.818898623
	I0520 10:39:31.841803   22116 fix.go:229] Guest: 2024-05-20 10:39:31.818898623 +0000 UTC Remote: 2024-05-20 10:39:31.730196568 +0000 UTC m=+80.035600532 (delta=88.702055ms)
	I0520 10:39:31.841817   22116 fix.go:200] guest clock delta is within tolerance: 88.702055ms
	I0520 10:39:31.841823   22116 start.go:83] releasing machines lock for "ha-570850-m02", held for 24.855267636s
	I0520 10:39:31.841840   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.842065   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:31.844853   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.845223   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.845250   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.846987   22116 out.go:177] * Found network options:
	I0520 10:39:31.848419   22116 out.go:177]   - NO_PROXY=192.168.39.152
	W0520 10:39:31.849635   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:39:31.849662   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.850170   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.850378   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.850494   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:39:31.850549   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	W0520 10:39:31.850580   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:39:31.850650   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:39:31.850670   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.853241   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853510   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853590   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.853614   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853773   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.853879   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.853906   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853931   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.854034   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.854107   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.854178   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.854231   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:31.854305   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.854407   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:32.092014   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:39:32.098694   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:39:32.098759   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:39:32.114429   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:39:32.114450   22116 start.go:494] detecting cgroup driver to use...
	I0520 10:39:32.114506   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:39:32.130116   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:39:32.143233   22116 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:39:32.143286   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:39:32.156247   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:39:32.169208   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:39:32.281948   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:39:32.447374   22116 docker.go:233] disabling docker service ...
	I0520 10:39:32.447452   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:39:32.462556   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:39:32.475137   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:39:32.596131   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:39:32.718787   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:39:32.732595   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:39:32.750542   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:39:32.750619   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.761208   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:39:32.761282   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.771691   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.781850   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.792064   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:39:32.803228   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.813962   22116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.833755   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.844816   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:39:32.854759   22116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:39:32.854814   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:39:32.868132   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:39:32.877630   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:39:32.997160   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:39:33.132143   22116 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:39:33.132205   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:39:33.136711   22116 start.go:562] Will wait 60s for crictl version
	I0520 10:39:33.136761   22116 ssh_runner.go:195] Run: which crictl
	I0520 10:39:33.140321   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:39:33.181132   22116 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:39:33.181218   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:39:33.209052   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:39:33.237968   22116 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:39:33.239273   22116 out.go:177]   - env NO_PROXY=192.168.39.152
	I0520 10:39:33.240457   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:33.243151   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:33.243477   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:33.243497   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:33.243689   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:39:33.247683   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:39:33.260509   22116 mustload.go:65] Loading cluster: ha-570850
	I0520 10:39:33.260696   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:33.260986   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:33.261022   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:33.275277   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0520 10:39:33.275611   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:33.276078   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:33.276095   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:33.276389   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:33.276579   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:33.278034   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:39:33.278300   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:33.278322   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:33.292610   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0520 10:39:33.292988   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:33.293370   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:33.293390   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:33.293720   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:33.293891   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:39:33.294042   22116 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.111
	I0520 10:39:33.294053   22116 certs.go:194] generating shared ca certs ...
	I0520 10:39:33.294065   22116 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:33.294234   22116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:39:33.294299   22116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:39:33.294313   22116 certs.go:256] generating profile certs ...
	I0520 10:39:33.294404   22116 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:39:33.294434   22116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e
	I0520 10:39:33.294454   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.254]
	I0520 10:39:33.433702   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e ...
	I0520 10:39:33.433727   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e: {Name:mk50e0176d866330008c6bf0741466d79579f158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:33.433869   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e ...
	I0520 10:39:33.433882   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e: {Name:mkf694731ad9752cd8224b47057675ee2ef4c0f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:33.433955   22116 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:39:33.434072   22116 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:39:33.434224   22116 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:39:33.434241   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:39:33.434257   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:39:33.434271   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:39:33.434283   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:39:33.434295   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:39:33.434307   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:39:33.434320   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:39:33.434332   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:39:33.434379   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:39:33.434407   22116 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:39:33.434416   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:39:33.434436   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:39:33.434457   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:39:33.434478   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:39:33.434529   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:39:33.434561   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:33.434575   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:39:33.434588   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:39:33.434618   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:39:33.437512   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:33.437878   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:39:33.437898   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:33.438079   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:39:33.438231   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:39:33.438386   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:39:33.438503   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:39:33.513250   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 10:39:33.518649   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 10:39:33.530768   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 10:39:33.534972   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 10:39:33.545492   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 10:39:33.549523   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 10:39:33.560108   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 10:39:33.564246   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0520 10:39:33.576032   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 10:39:33.582075   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 10:39:33.592577   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 10:39:33.599181   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 10:39:33.610518   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:39:33.635730   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:39:33.658738   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:39:33.682051   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:39:33.704864   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 10:39:33.727326   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:39:33.750031   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:39:33.773040   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:39:33.795743   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:39:33.818902   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:39:33.843003   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:39:33.865500   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 10:39:33.883022   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 10:39:33.900124   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 10:39:33.917533   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0520 10:39:33.933804   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 10:39:33.950010   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 10:39:33.966659   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 10:39:33.983641   22116 ssh_runner.go:195] Run: openssl version
	I0520 10:39:33.989734   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:39:34.001022   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:34.005480   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:34.005531   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:34.011505   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:39:34.022713   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:39:34.033721   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:39:34.038023   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:39:34.038066   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:39:34.043598   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:39:34.054302   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:39:34.066192   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:39:34.070488   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:39:34.070536   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:39:34.075919   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:39:34.086630   22116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:39:34.090432   22116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:39:34.090484   22116 kubeadm.go:928] updating node {m02 192.168.39.111 8443 v1.30.1 crio true true} ...
	I0520 10:39:34.090577   22116 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:39:34.090603   22116 kube-vip.go:115] generating kube-vip config ...
	I0520 10:39:34.090639   22116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:39:34.108817   22116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:39:34.108876   22116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 10:39:34.108921   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:39:34.118705   22116 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 10:39:34.118758   22116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 10:39:34.128571   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 10:39:34.128595   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:39:34.128627   22116 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0520 10:39:34.128652   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:39:34.128672   22116 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0520 10:39:34.134262   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 10:39:34.134287   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 10:40:06.114857   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:40:06.114962   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:40:06.120508   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 10:40:06.120545   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 10:40:39.954103   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:40:39.969219   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:40:39.969302   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:40:39.973800   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 10:40:39.973835   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
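Each Kubernetes binary (kubectl, kubeadm, kubelet) is first downloaded into the local cache with a checksum URL and then scp'd onto the node once the stat existence check fails. Below is a minimal sketch of the download-and-verify half, assuming the .sha256 URL serves just the hex digest, as the checksum= parameters above suggest; this is an illustration, not minikube's download.go:

// sha256check.go - fetch a release binary plus its published ".sha256" file
// and verify the digest before writing the binary to disk.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet"

	binary, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	got := sha256.Sum256(binary)
	want := strings.TrimSpace(string(sum))
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}

	// 0755 so the kubelet binary is executable once copied into place.
	if err := os.WriteFile("kubelet", binary, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet verified and written")
}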
	I0520 10:40:40.362696   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 10:40:40.373412   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 10:40:40.390213   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:40:40.406055   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:40:40.422300   22116 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:40:40.426365   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
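The grep/echo one-liner above pins control-plane.minikube.internal to the HA VIP in /etc/hosts, dropping any stale mapping first. A hedged Go equivalent of the same idea (illustrative only; the path, IP, and hostname are taken from the log):

// hostspin.go - ensure /etc/hosts maps control-plane.minikube.internal to one IP.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for the control-plane name.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}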
	I0520 10:40:40.438809   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:40:40.570970   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:40:40.590302   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:40:40.590810   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:40:40.590864   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:40:40.605443   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I0520 10:40:40.605836   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:40:40.606447   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:40:40.606474   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:40:40.606871   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:40:40.607102   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:40:40.607285   22116 start.go:316] joinCluster: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:40:40.607386   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 10:40:40.607408   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:40:40.610720   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:40:40.611154   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:40:40.611183   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:40:40.611313   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:40:40.611479   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:40:40.611629   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:40:40.611777   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:40:40.768415   22116 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:40:40.768472   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 98k8uo.fdec3swqm00gq5a1 --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0520 10:41:01.785471   22116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 98k8uo.fdec3swqm00gq5a1 --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (21.016969951s)
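The join itself is a single kubeadm invocation executed on m02 (minikube drives it through its ssh_runner); the flags below mirror the command in the log. Shown here as a purely local illustration, with the token and CA hash replaced by placeholders rather than the real values:

// joinsketch.go - illustrative shape of a kubeadm control-plane join.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Token and discovery hash below are placeholders, not real values.
	cmd := exec.Command("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "<token>",
		"--discovery-token-ca-cert-hash", "sha256:<hash>",
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-570850-m02",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.111",
		"--apiserver-bind-port=8443",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}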
	I0520 10:41:01.785503   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 10:41:02.300389   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-570850-m02 minikube.k8s.io/updated_at=2024_05_20T10_41_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-570850 minikube.k8s.io/primary=false
	I0520 10:41:02.438024   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-570850-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 10:41:02.629534   22116 start.go:318] duration metric: took 22.022247037s to joinCluster
	I0520 10:41:02.629609   22116 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:41:02.631461   22116 out.go:177] * Verifying Kubernetes components...
	I0520 10:41:02.629947   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:02.632826   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:41:02.886193   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:41:02.986524   22116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:41:02.986857   22116 kapi.go:59] client config for ha-570850: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 10:41:02.986938   22116 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I0520 10:41:02.987194   22116 node_ready.go:35] waiting up to 6m0s for node "ha-570850-m02" to be "Ready" ...
	I0520 10:41:02.987280   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:02.987289   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:02.987300   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:02.987306   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:02.999872   22116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 10:41:03.488109   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:03.488131   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:03.488140   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:03.488145   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:03.491335   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:03.987444   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:03.987463   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:03.987471   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:03.987476   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:03.991307   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:04.487477   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:04.487499   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:04.487506   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:04.487509   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:04.491467   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:04.987667   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:04.987693   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:04.987704   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:04.987711   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:04.991266   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:04.992173   22116 node_ready.go:53] node "ha-570850-m02" has status "Ready":"False"
	I0520 10:41:05.487468   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:05.487490   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:05.487498   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:05.487503   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:05.490924   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:05.988316   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:05.988344   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:05.988354   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:05.988363   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:05.991871   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:06.488036   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:06.488061   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:06.488074   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:06.488080   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:06.492192   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:06.988297   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:06.988321   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:06.988330   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:06.988334   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:06.991458   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:06.992306   22116 node_ready.go:53] node "ha-570850-m02" has status "Ready":"False"
	I0520 10:41:07.487624   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:07.487645   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:07.487653   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:07.487656   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:07.495416   22116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 10:41:07.987398   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:07.987421   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:07.987428   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:07.987433   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:07.990508   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:08.488365   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:08.488387   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:08.488396   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:08.488400   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:08.491732   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:08.987824   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:08.987848   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:08.987857   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:08.987861   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:08.991307   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:09.487360   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:09.487384   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:09.487393   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:09.487399   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:09.490706   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:09.491633   22116 node_ready.go:53] node "ha-570850-m02" has status "Ready":"False"
	I0520 10:41:09.987744   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:09.987774   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:09.987782   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:09.987785   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:09.996145   22116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 10:41:10.488253   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:10.488275   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:10.488285   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:10.488291   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:10.491665   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:10.987847   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:10.987872   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:10.987883   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:10.987889   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:10.990906   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:10.991448   22116 node_ready.go:49] node "ha-570850-m02" has status "Ready":"True"
	I0520 10:41:10.991462   22116 node_ready.go:38] duration metric: took 8.004248686s for node "ha-570850-m02" to be "Ready" ...
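The loop above is a plain GET against /api/v1/nodes/ha-570850-m02 roughly every 500ms until the node's Ready condition turns True. A minimal client-go sketch of the same check (not minikube's node_ready.go; the kubeconfig path here is only an example):

// nodeready.go - poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-570850-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log above polls at roughly this interval
	}
	panic("timed out waiting for node to become Ready")
}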
	I0520 10:41:10.991470   22116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:41:10.991532   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:10.991542   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:10.991549   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:10.991552   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:10.996215   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:11.001955   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.002039   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jw77g
	I0520 10:41:11.002050   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.002057   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.002067   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.004771   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.005380   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:11.005395   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.005401   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.005404   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.007919   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.008395   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:11.008409   22116 pod_ready.go:81] duration metric: took 6.432561ms for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.008416   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.008456   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xr965
	I0520 10:41:11.008463   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.008470   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.008476   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.011121   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.011647   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:11.011661   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.011668   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.011671   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.013872   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.014303   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:11.014318   22116 pod_ready.go:81] duration metric: took 5.893335ms for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.014327   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.014365   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850
	I0520 10:41:11.014372   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.014378   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.014382   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.016490   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.017031   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:11.017043   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.017055   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.017059   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.019061   22116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 10:41:11.019599   22116 pod_ready.go:92] pod "etcd-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:11.019616   22116 pod_ready.go:81] duration metric: took 5.284495ms for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.019623   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.019662   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:11.019669   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.019677   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.019681   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.022317   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.022962   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:11.022979   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.022989   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.022995   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.025524   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.520515   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:11.520536   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.520543   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.520546   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.523892   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:11.524481   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:11.524494   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.524500   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.524504   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.527190   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:12.020783   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:12.020817   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.020828   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.020833   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.023936   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:12.024469   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:12.024484   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.024491   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.024495   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.027017   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:12.520532   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:12.520557   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.520565   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.520570   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.523743   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:12.524556   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:12.524571   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.524580   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.524585   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.527344   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:13.020163   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:13.020183   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.020189   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.020192   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.023756   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:13.025120   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:13.025135   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.025142   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.025148   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.027603   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:13.028144   22116 pod_ready.go:102] pod "etcd-ha-570850-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 10:41:13.520507   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:13.520528   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.520536   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.520539   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.524399   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:13.525041   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:13.525065   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.525072   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.525075   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.527746   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:14.020394   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:14.020418   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.020428   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.020433   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.023994   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:14.024536   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:14.024549   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.024559   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.024568   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.027063   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:14.519855   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:14.519876   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.519884   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.519888   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.523369   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:14.524061   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:14.524082   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.524093   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.524097   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.526791   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.019892   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:15.019914   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.019922   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.019926   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.023048   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.023665   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.023679   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.023687   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.023692   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.026213   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.026750   22116 pod_ready.go:92] pod "etcd-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.026766   22116 pod_ready.go:81] duration metric: took 4.007136695s for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.026779   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.026825   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850
	I0520 10:41:15.026833   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.026840   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.026844   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.029376   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.030222   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:15.030238   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.030249   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.030256   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.032576   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.033104   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.033120   22116 pod_ready.go:81] duration metric: took 6.335034ms for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.033129   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.033172   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m02
	I0520 10:41:15.033180   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.033187   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.033190   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.035714   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.036374   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.036387   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.036395   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.036398   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.038750   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.039299   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.039314   22116 pod_ready.go:81] duration metric: took 6.179289ms for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.039322   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.188708   22116 request.go:629] Waited for 149.327519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:41:15.188789   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:41:15.188797   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.188808   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.188814   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.192307   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.388608   22116 request.go:629] Waited for 195.362547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:15.388671   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:15.388678   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.388686   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.388690   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.392347   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.393621   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.393639   22116 pod_ready.go:81] duration metric: took 354.309662ms for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
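The "Waited ... due to client-side throttling" messages above come from client-go's default request rate limiter (roughly 5 QPS with a burst of 10), not from API-server priority and fairness. A caller that wanted to avoid those delays could raise the limits on the rest.Config before building the clientset; a hedged sketch, with an example kubeconfig path and not minikube's actual code:

// throttle.go - raise client-go's client-side rate limits.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	// Allow more requests before the client starts delaying them.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}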
	I0520 10:41:15.393649   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.588658   22116 request.go:629] Waited for 194.942384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.588741   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.588767   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.588778   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.588787   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.592186   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.788347   22116 request.go:629] Waited for 195.27123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.788423   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.788430   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.788441   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.788448   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.791760   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.988640   22116 request.go:629] Waited for 94.336306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.988715   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.988726   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.988746   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.988755   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.991688   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:16.188778   22116 request.go:629] Waited for 196.359348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.188840   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.188851   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.188862   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.188872   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.191815   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:16.394498   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:16.394540   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.394549   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.394553   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.398027   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:16.588304   22116 request.go:629] Waited for 189.366125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.588373   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.588381   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.588388   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.588396   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.590853   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:16.894322   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:16.894342   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.894349   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.894353   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.897963   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:16.988073   22116 request.go:629] Waited for 89.246261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.988133   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.988139   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.988146   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.988149   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.991229   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:17.394611   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:17.394632   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.394639   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.394641   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.397561   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:17.398153   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:17.398167   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.398174   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.398182   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.400905   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:17.401501   22116 pod_ready.go:102] pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 10:41:17.893960   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:17.893987   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.893998   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.894003   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.898131   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:17.899794   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:17.899810   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.899818   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.899823   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.905616   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:41:17.906773   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:17.906800   22116 pod_ready.go:81] duration metric: took 2.51314231s for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:17.906813   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:17.988132   22116 request.go:629] Waited for 81.255115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:41:17.988191   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:41:17.988200   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.988211   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.988220   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.991659   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.188696   22116 request.go:629] Waited for 196.387999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:18.188759   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:18.188766   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.188779   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.188787   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.193609   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:18.194439   22116 pod_ready.go:92] pod "kube-proxy-6s5x7" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:18.194461   22116 pod_ready.go:81] duration metric: took 287.637901ms for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.194472   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.388407   22116 request.go:629] Waited for 193.871401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:41:18.388480   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:41:18.388487   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.388497   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.388502   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.391681   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.588632   22116 request.go:629] Waited for 196.074265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.588698   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.588703   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.588710   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.588714   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.591868   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.592826   22116 pod_ready.go:92] pod "kube-proxy-dnhm6" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:18.592844   22116 pod_ready.go:81] duration metric: took 398.362862ms for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.592855   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.788777   22116 request.go:629] Waited for 195.860567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:41:18.788834   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:41:18.788839   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.788846   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.788850   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.791832   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:18.988873   22116 request.go:629] Waited for 196.332538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.988921   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.988933   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.988940   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.988944   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.992467   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.993478   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:18.993499   22116 pod_ready.go:81] duration metric: took 400.633573ms for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.993512   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:19.188534   22116 request.go:629] Waited for 194.962605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:41:19.188612   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:41:19.188623   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.188632   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.188643   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.192357   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:19.388374   22116 request.go:629] Waited for 195.367276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:19.388425   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:19.388429   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.388436   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.388440   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.391758   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:19.392455   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:19.392473   22116 pod_ready.go:81] duration metric: took 398.952385ms for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:19.392486   22116 pod_ready.go:38] duration metric: took 8.401004615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
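
	The readiness loop logged above (pod_ready.go) boils down to repeatedly fetching each system pod and checking its Ready condition until it is True or the timeout expires. A minimal sketch of that pattern with client-go; the kubeconfig path, poll interval and pod name are purely illustrative:

	// podready_sketch.go: poll a pod until its Ready condition is True.
	package main

	import (
	    "context"
	    "fmt"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	    deadline := time.Now().Add(timeout)
	    for time.Now().Before(deadline) {
	        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	        if err == nil {
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                    return nil // matches the `has status "Ready":"True"` lines above
	                }
	            }
	        }
	        time.Sleep(500 * time.Millisecond) // interval is illustrative
	    }
	    return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	    if err != nil {
	        panic(err)
	    }
	    cs := kubernetes.NewForConfigOrDie(cfg)
	    if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-6s5x7", 6*time.Minute); err != nil {
	        panic(err)
	    }
	    fmt.Println("pod is Ready")
	}

	The real helper additionally selects pods by the labels listed in the log (k8s-app=kube-dns, component=etcd, ...) and fetches the owning node after each pod, but the Ready-condition test is the core of it.
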
	I0520 10:41:19.392505   22116 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:41:19.392562   22116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:41:19.415039   22116 api_server.go:72] duration metric: took 16.785398633s to wait for apiserver process to appear ...
	I0520 10:41:19.415062   22116 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:41:19.415077   22116 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0520 10:41:19.420981   22116 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0520 10:41:19.421055   22116 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I0520 10:41:19.421068   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.421077   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.421081   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.421914   22116 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 10:41:19.422069   22116 api_server.go:141] control plane version: v1.30.1
	I0520 10:41:19.422091   22116 api_server.go:131] duration metric: took 7.02303ms to wait for apiserver health ...
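
	The healthz probe is a plain HTTPS GET that expects a 200 response with the body "ok", followed by a GET of /version to read the control-plane version. A self-contained sketch of the health check; the endpoint is taken from the log, while the TLS handling is simplified and not how minikube authenticates:

	// healthz_sketch.go: HTTPS GET against the apiserver /healthz endpoint.
	package main

	import (
	    "crypto/tls"
	    "fmt"
	    "io"
	    "net/http"
	    "time"
	)

	func main() {
	    client := &http.Client{
	        Timeout: 5 * time.Second,
	        // minikube verifies against the cluster CA; skipping verification here
	        // only keeps the sketch self-contained.
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    }
	    resp, err := client.Get("https://192.168.39.152:8443/healthz") // endpoint from the log
	    if err != nil {
	        panic(err)
	    }
	    defer resp.Body.Close()
	    body, err := io.ReadAll(resp.Body)
	    if err != nil {
	        panic(err)
	    }
	    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // healthy apiserver answers 200 "ok"
	}
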
	I0520 10:41:19.422100   22116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:41:19.588512   22116 request.go:629] Waited for 166.343985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.588594   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.588602   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.588612   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.588623   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.593489   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:19.598445   22116 system_pods.go:59] 17 kube-system pods found
	I0520 10:41:19.598470   22116 system_pods.go:61] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:41:19.598476   22116 system_pods.go:61] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:41:19.598481   22116 system_pods.go:61] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:41:19.598486   22116 system_pods.go:61] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:41:19.598490   22116 system_pods.go:61] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:41:19.598501   22116 system_pods.go:61] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:41:19.598506   22116 system_pods.go:61] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:41:19.598513   22116 system_pods.go:61] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:41:19.598516   22116 system_pods.go:61] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:41:19.598521   22116 system_pods.go:61] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:41:19.598524   22116 system_pods.go:61] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:41:19.598527   22116 system_pods.go:61] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:41:19.598531   22116 system_pods.go:61] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:41:19.598535   22116 system_pods.go:61] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:41:19.598538   22116 system_pods.go:61] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:41:19.598542   22116 system_pods.go:61] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:41:19.598544   22116 system_pods.go:61] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:41:19.598549   22116 system_pods.go:74] duration metric: took 176.444033ms to wait for pod list to return data ...
	I0520 10:41:19.598557   22116 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:41:19.787920   22116 request.go:629] Waited for 189.296478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:41:19.787998   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:41:19.788003   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.788010   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.788015   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.791012   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:19.791239   22116 default_sa.go:45] found service account: "default"
	I0520 10:41:19.791258   22116 default_sa.go:55] duration metric: took 192.694371ms for default service account to be created ...
	I0520 10:41:19.791267   22116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:41:19.988481   22116 request.go:629] Waited for 197.157713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.988532   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.988537   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.988544   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.988552   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.994353   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:41:19.999194   22116 system_pods.go:86] 17 kube-system pods found
	I0520 10:41:19.999216   22116 system_pods.go:89] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:41:19.999223   22116 system_pods.go:89] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:41:19.999230   22116 system_pods.go:89] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:41:19.999241   22116 system_pods.go:89] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:41:19.999247   22116 system_pods.go:89] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:41:19.999253   22116 system_pods.go:89] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:41:19.999262   22116 system_pods.go:89] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:41:19.999268   22116 system_pods.go:89] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:41:19.999273   22116 system_pods.go:89] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:41:19.999278   22116 system_pods.go:89] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:41:19.999282   22116 system_pods.go:89] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:41:19.999286   22116 system_pods.go:89] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:41:19.999290   22116 system_pods.go:89] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:41:19.999294   22116 system_pods.go:89] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:41:19.999298   22116 system_pods.go:89] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:41:19.999302   22116 system_pods.go:89] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:41:19.999305   22116 system_pods.go:89] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:41:19.999311   22116 system_pods.go:126] duration metric: took 208.03832ms to wait for k8s-apps to be running ...
	I0520 10:41:19.999319   22116 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:41:19.999370   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:41:20.019342   22116 system_svc.go:56] duration metric: took 20.01456ms WaitForService to wait for kubelet
	I0520 10:41:20.019366   22116 kubeadm.go:576] duration metric: took 17.389728118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:41:20.019383   22116 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:41:20.188818   22116 request.go:629] Waited for 169.356099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I0520 10:41:20.188884   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I0520 10:41:20.188890   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:20.188896   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:20.188900   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:20.192704   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:20.193489   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:41:20.193510   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:41:20.193521   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:41:20.193525   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:41:20.193529   22116 node_conditions.go:105] duration metric: took 174.142433ms to run NodePressure ...
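
	The NodePressure step lists the nodes and reads their reported capacity, which is where the "ephemeral capacity is 17734596Ki" and "cpu capacity is 2" figures come from. A short client-go sketch of reading those fields (kubeconfig path illustrative):

	// nodecaps_sketch.go: read CPU and ephemeral-storage capacity from each node.
	package main

	import (
	    "context"
	    "fmt"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	    if err != nil {
	        panic(err)
	    }
	    cs := kubernetes.NewForConfigOrDie(cfg)
	    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	    if err != nil {
	        panic(err)
	    }
	    for _, n := range nodes.Items {
	        cpu := n.Status.Capacity[corev1.ResourceCPU]
	        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	        // e.g. "ha-570850: cpu=2 ephemeral-storage=17734596Ki", as in the log
	        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	    }
	}
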
	I0520 10:41:20.193538   22116 start.go:240] waiting for startup goroutines ...
	I0520 10:41:20.193560   22116 start.go:254] writing updated cluster config ...
	I0520 10:41:20.195606   22116 out.go:177] 
	I0520 10:41:20.196922   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:20.197059   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:41:20.198510   22116 out.go:177] * Starting "ha-570850-m03" control-plane node in "ha-570850" cluster
	I0520 10:41:20.199598   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:41:20.199617   22116 cache.go:56] Caching tarball of preloaded images
	I0520 10:41:20.199712   22116 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:41:20.199724   22116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:41:20.199822   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:41:20.199985   22116 start.go:360] acquireMachinesLock for ha-570850-m03: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:41:20.200036   22116 start.go:364] duration metric: took 30.181µs to acquireMachinesLock for "ha-570850-m03"
	I0520 10:41:20.200060   22116 start.go:93] Provisioning new machine with config: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:41:20.200161   22116 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0520 10:41:20.201518   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 10:41:20.201601   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:20.201651   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:20.216212   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0520 10:41:20.216563   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:20.217029   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:20.217052   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:20.217397   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:20.217563   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:20.217695   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:20.217863   22116 start.go:159] libmachine.API.Create for "ha-570850" (driver="kvm2")
	I0520 10:41:20.217888   22116 client.go:168] LocalClient.Create starting
	I0520 10:41:20.217916   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:41:20.217946   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:41:20.217959   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:41:20.218011   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:41:20.218029   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:41:20.218046   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:41:20.218063   22116 main.go:141] libmachine: Running pre-create checks...
	I0520 10:41:20.218071   22116 main.go:141] libmachine: (ha-570850-m03) Calling .PreCreateCheck
	I0520 10:41:20.218242   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetConfigRaw
	I0520 10:41:20.218600   22116 main.go:141] libmachine: Creating machine...
	I0520 10:41:20.218614   22116 main.go:141] libmachine: (ha-570850-m03) Calling .Create
	I0520 10:41:20.218735   22116 main.go:141] libmachine: (ha-570850-m03) Creating KVM machine...
	I0520 10:41:20.219883   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found existing default KVM network
	I0520 10:41:20.219964   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found existing private KVM network mk-ha-570850
	I0520 10:41:20.220062   22116 main.go:141] libmachine: (ha-570850-m03) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03 ...
	I0520 10:41:20.220085   22116 main.go:141] libmachine: (ha-570850-m03) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:41:20.220143   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.220042   23111 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:41:20.220223   22116 main.go:141] libmachine: (ha-570850-m03) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:41:20.430926   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.430801   23111 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa...
	I0520 10:41:20.731910   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.731810   23111 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/ha-570850-m03.rawdisk...
	I0520 10:41:20.731937   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Writing magic tar header
	I0520 10:41:20.731950   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Writing SSH key tar header
	I0520 10:41:20.731962   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.731914   23111 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03 ...
	I0520 10:41:20.732013   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03
	I0520 10:41:20.732048   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03 (perms=drwx------)
	I0520 10:41:20.732079   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:41:20.732092   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:41:20.732099   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:41:20.732111   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:41:20.732118   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:41:20.732129   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:41:20.732137   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home
	I0520 10:41:20.732153   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Skipping /home - not owner
	I0520 10:41:20.732167   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:41:20.732186   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:41:20.732196   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:41:20.732201   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:41:20.732209   22116 main.go:141] libmachine: (ha-570850-m03) Creating domain...
	I0520 10:41:20.733237   22116 main.go:141] libmachine: (ha-570850-m03) define libvirt domain using xml: 
	I0520 10:41:20.733252   22116 main.go:141] libmachine: (ha-570850-m03) <domain type='kvm'>
	I0520 10:41:20.733260   22116 main.go:141] libmachine: (ha-570850-m03)   <name>ha-570850-m03</name>
	I0520 10:41:20.733268   22116 main.go:141] libmachine: (ha-570850-m03)   <memory unit='MiB'>2200</memory>
	I0520 10:41:20.733295   22116 main.go:141] libmachine: (ha-570850-m03)   <vcpu>2</vcpu>
	I0520 10:41:20.733318   22116 main.go:141] libmachine: (ha-570850-m03)   <features>
	I0520 10:41:20.733330   22116 main.go:141] libmachine: (ha-570850-m03)     <acpi/>
	I0520 10:41:20.733340   22116 main.go:141] libmachine: (ha-570850-m03)     <apic/>
	I0520 10:41:20.733349   22116 main.go:141] libmachine: (ha-570850-m03)     <pae/>
	I0520 10:41:20.733361   22116 main.go:141] libmachine: (ha-570850-m03)     
	I0520 10:41:20.733372   22116 main.go:141] libmachine: (ha-570850-m03)   </features>
	I0520 10:41:20.733378   22116 main.go:141] libmachine: (ha-570850-m03)   <cpu mode='host-passthrough'>
	I0520 10:41:20.733391   22116 main.go:141] libmachine: (ha-570850-m03)   
	I0520 10:41:20.733404   22116 main.go:141] libmachine: (ha-570850-m03)   </cpu>
	I0520 10:41:20.733416   22116 main.go:141] libmachine: (ha-570850-m03)   <os>
	I0520 10:41:20.733427   22116 main.go:141] libmachine: (ha-570850-m03)     <type>hvm</type>
	I0520 10:41:20.733439   22116 main.go:141] libmachine: (ha-570850-m03)     <boot dev='cdrom'/>
	I0520 10:41:20.733449   22116 main.go:141] libmachine: (ha-570850-m03)     <boot dev='hd'/>
	I0520 10:41:20.733460   22116 main.go:141] libmachine: (ha-570850-m03)     <bootmenu enable='no'/>
	I0520 10:41:20.733465   22116 main.go:141] libmachine: (ha-570850-m03)   </os>
	I0520 10:41:20.733475   22116 main.go:141] libmachine: (ha-570850-m03)   <devices>
	I0520 10:41:20.733492   22116 main.go:141] libmachine: (ha-570850-m03)     <disk type='file' device='cdrom'>
	I0520 10:41:20.733508   22116 main.go:141] libmachine: (ha-570850-m03)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/boot2docker.iso'/>
	I0520 10:41:20.733519   22116 main.go:141] libmachine: (ha-570850-m03)       <target dev='hdc' bus='scsi'/>
	I0520 10:41:20.733528   22116 main.go:141] libmachine: (ha-570850-m03)       <readonly/>
	I0520 10:41:20.733538   22116 main.go:141] libmachine: (ha-570850-m03)     </disk>
	I0520 10:41:20.733548   22116 main.go:141] libmachine: (ha-570850-m03)     <disk type='file' device='disk'>
	I0520 10:41:20.733556   22116 main.go:141] libmachine: (ha-570850-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:41:20.733570   22116 main.go:141] libmachine: (ha-570850-m03)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/ha-570850-m03.rawdisk'/>
	I0520 10:41:20.733586   22116 main.go:141] libmachine: (ha-570850-m03)       <target dev='hda' bus='virtio'/>
	I0520 10:41:20.733597   22116 main.go:141] libmachine: (ha-570850-m03)     </disk>
	I0520 10:41:20.733605   22116 main.go:141] libmachine: (ha-570850-m03)     <interface type='network'>
	I0520 10:41:20.733620   22116 main.go:141] libmachine: (ha-570850-m03)       <source network='mk-ha-570850'/>
	I0520 10:41:20.733630   22116 main.go:141] libmachine: (ha-570850-m03)       <model type='virtio'/>
	I0520 10:41:20.733639   22116 main.go:141] libmachine: (ha-570850-m03)     </interface>
	I0520 10:41:20.733646   22116 main.go:141] libmachine: (ha-570850-m03)     <interface type='network'>
	I0520 10:41:20.733674   22116 main.go:141] libmachine: (ha-570850-m03)       <source network='default'/>
	I0520 10:41:20.733696   22116 main.go:141] libmachine: (ha-570850-m03)       <model type='virtio'/>
	I0520 10:41:20.733710   22116 main.go:141] libmachine: (ha-570850-m03)     </interface>
	I0520 10:41:20.733721   22116 main.go:141] libmachine: (ha-570850-m03)     <serial type='pty'>
	I0520 10:41:20.733734   22116 main.go:141] libmachine: (ha-570850-m03)       <target port='0'/>
	I0520 10:41:20.733745   22116 main.go:141] libmachine: (ha-570850-m03)     </serial>
	I0520 10:41:20.733755   22116 main.go:141] libmachine: (ha-570850-m03)     <console type='pty'>
	I0520 10:41:20.733764   22116 main.go:141] libmachine: (ha-570850-m03)       <target type='serial' port='0'/>
	I0520 10:41:20.733785   22116 main.go:141] libmachine: (ha-570850-m03)     </console>
	I0520 10:41:20.733804   22116 main.go:141] libmachine: (ha-570850-m03)     <rng model='virtio'>
	I0520 10:41:20.733819   22116 main.go:141] libmachine: (ha-570850-m03)       <backend model='random'>/dev/random</backend>
	I0520 10:41:20.733826   22116 main.go:141] libmachine: (ha-570850-m03)     </rng>
	I0520 10:41:20.733833   22116 main.go:141] libmachine: (ha-570850-m03)     
	I0520 10:41:20.733842   22116 main.go:141] libmachine: (ha-570850-m03)     
	I0520 10:41:20.733854   22116 main.go:141] libmachine: (ha-570850-m03)   </devices>
	I0520 10:41:20.733865   22116 main.go:141] libmachine: (ha-570850-m03) </domain>
	I0520 10:41:20.733883   22116 main.go:141] libmachine: (ha-570850-m03) 
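
	The <domain> XML above is handed to libvirt to define and boot the new VM. A rough sketch of that step using the libvirt.org/go/libvirt bindings; the exact calls the kvm2 driver makes may differ, and the file name here is an assumption:

	// domaindefine_sketch.go: define the domain from its XML and start it.
	package main

	import (
	    "fmt"
	    "os"

	    "libvirt.org/go/libvirt"
	)

	func main() {
	    // The XML file is assumed to hold the <domain> document printed in the log.
	    xml, err := os.ReadFile("ha-570850-m03.xml")
	    if err != nil {
	        panic(err)
	    }
	    // Same URI as KVMQemuURI in the machine config above.
	    conn, err := libvirt.NewConnect("qemu:///system")
	    if err != nil {
	        panic(err)
	    }
	    defer conn.Close()

	    dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	    if err != nil {
	        panic(err)
	    }
	    defer dom.Free()

	    if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
	        panic(err)
	    }
	    fmt.Println("domain defined and started")
	}

	Once the domain is running, the driver polls the network's DHCP leases for the guest's MAC address, which is the "Waiting to get IP..." retry loop that follows.
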
	I0520 10:41:20.740500   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:ed:f4:db in network default
	I0520 10:41:20.741087   22116 main.go:141] libmachine: (ha-570850-m03) Ensuring networks are active...
	I0520 10:41:20.741104   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:20.741829   22116 main.go:141] libmachine: (ha-570850-m03) Ensuring network default is active
	I0520 10:41:20.742208   22116 main.go:141] libmachine: (ha-570850-m03) Ensuring network mk-ha-570850 is active
	I0520 10:41:20.742551   22116 main.go:141] libmachine: (ha-570850-m03) Getting domain xml...
	I0520 10:41:20.743245   22116 main.go:141] libmachine: (ha-570850-m03) Creating domain...
	I0520 10:41:21.960333   22116 main.go:141] libmachine: (ha-570850-m03) Waiting to get IP...
	I0520 10:41:21.961050   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:21.961460   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:21.961514   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:21.961439   23111 retry.go:31] will retry after 268.018752ms: waiting for machine to come up
	I0520 10:41:22.230979   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:22.231446   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:22.231470   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:22.231400   23111 retry.go:31] will retry after 287.26263ms: waiting for machine to come up
	I0520 10:41:22.519701   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:22.520118   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:22.520142   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:22.520074   23111 retry.go:31] will retry after 324.707914ms: waiting for machine to come up
	I0520 10:41:22.846403   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:22.846978   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:22.847005   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:22.846933   23111 retry.go:31] will retry after 379.38423ms: waiting for machine to come up
	I0520 10:41:23.227427   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:23.227894   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:23.227922   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:23.227853   23111 retry.go:31] will retry after 502.147306ms: waiting for machine to come up
	I0520 10:41:23.731472   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:23.731950   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:23.731980   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:23.731893   23111 retry.go:31] will retry after 867.897028ms: waiting for machine to come up
	I0520 10:41:24.601641   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:24.602172   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:24.602200   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:24.602129   23111 retry.go:31] will retry after 758.230217ms: waiting for machine to come up
	I0520 10:41:25.361833   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:25.362238   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:25.362264   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:25.362202   23111 retry.go:31] will retry after 1.041261872s: waiting for machine to come up
	I0520 10:41:26.405315   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:26.405801   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:26.405831   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:26.405749   23111 retry.go:31] will retry after 1.248321269s: waiting for machine to come up
	I0520 10:41:27.655273   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:27.655670   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:27.655692   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:27.655654   23111 retry.go:31] will retry after 1.816923512s: waiting for machine to come up
	I0520 10:41:29.474155   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:29.474630   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:29.474652   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:29.474580   23111 retry.go:31] will retry after 1.809298154s: waiting for machine to come up
	I0520 10:41:31.285822   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:31.286229   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:31.286253   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:31.286179   23111 retry.go:31] will retry after 2.419090022s: waiting for machine to come up
	I0520 10:41:33.707009   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:33.707672   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:33.707698   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:33.707629   23111 retry.go:31] will retry after 4.274360948s: waiting for machine to come up
	I0520 10:41:37.983834   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:37.984218   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:37.984240   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:37.984182   23111 retry.go:31] will retry after 3.561976174s: waiting for machine to come up
	I0520 10:41:41.547233   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.547761   22116 main.go:141] libmachine: (ha-570850-m03) Found IP for machine: 192.168.39.240
	I0520 10:41:41.547791   22116 main.go:141] libmachine: (ha-570850-m03) Reserving static IP address...
	I0520 10:41:41.547806   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has current primary IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.548185   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find host DHCP lease matching {name: "ha-570850-m03", mac: "52:54:00:7b:76:2b", ip: "192.168.39.240"} in network mk-ha-570850
	I0520 10:41:41.621513   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Getting to WaitForSSH function...
	I0520 10:41:41.621537   22116 main.go:141] libmachine: (ha-570850-m03) Reserved static IP address: 192.168.39.240
	I0520 10:41:41.621548   22116 main.go:141] libmachine: (ha-570850-m03) Waiting for SSH to be available...
	I0520 10:41:41.624355   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.624794   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.624816   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.624978   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Using SSH client type: external
	I0520 10:41:41.625019   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa (-rw-------)
	I0520 10:41:41.625049   22116 main.go:141] libmachine: (ha-570850-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:41:41.625059   22116 main.go:141] libmachine: (ha-570850-m03) DBG | About to run SSH command:
	I0520 10:41:41.625070   22116 main.go:141] libmachine: (ha-570850-m03) DBG | exit 0
	I0520 10:41:41.752972   22116 main.go:141] libmachine: (ha-570850-m03) DBG | SSH cmd err, output: <nil>: 
	I0520 10:41:41.753241   22116 main.go:141] libmachine: (ha-570850-m03) KVM machine creation complete!
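
	WaitForSSH, as logged above, simply retries "exit 0" over SSH (using the external ssh client and the options shown at 10:41:41.625049) until the guest answers. An illustrative retry loop in the same spirit; the user, host and key path mirror the log but are example values:

	// waitforssh_sketch.go: run "exit 0" over ssh until it succeeds or times out.
	package main

	import (
	    "fmt"
	    "os/exec"
	    "time"
	)

	func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	    deadline := time.Now().Add(timeout)
	    for time.Now().Before(deadline) {
	        cmd := exec.Command("ssh",
	            "-i", keyPath,
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "ConnectTimeout=10",
	            fmt.Sprintf("%s@%s", user, host),
	            "exit 0")
	        if err := cmd.Run(); err == nil {
	            return nil // SSH is up; provisioning can continue
	        }
	        time.Sleep(3 * time.Second) // retry interval is illustrative
	    }
	    return fmt.Errorf("ssh to %s@%s not available within %s", user, host, timeout)
	}

	func main() {
	    if err := waitForSSH("docker", "192.168.39.240", "/path/to/id_rsa", 5*time.Minute); err != nil {
	        panic(err)
	    }
	    fmt.Println("ssh available")
	}
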
	I0520 10:41:41.753527   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetConfigRaw
	I0520 10:41:41.754036   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:41.754199   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:41.754368   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:41:41.754382   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:41:41.755637   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:41:41.755651   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:41:41.755656   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:41:41.755661   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:41.758103   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.758459   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.758486   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.758625   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:41.758789   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.758970   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.759110   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:41.759279   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:41.759468   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:41.759478   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:41:41.868268   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:41:41.868294   22116 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:41:41.868304   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:41.871139   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.871529   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.871575   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.871670   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:41.871879   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.872058   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.872232   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:41.872412   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:41.872619   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:41.872632   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:41:41.985643   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:41:41.985714   22116 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:41:41.985728   22116 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:41:41.985740   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:41.985999   22116 buildroot.go:166] provisioning hostname "ha-570850-m03"
	I0520 10:41:41.986029   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:41.986232   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:41.988748   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.989221   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.989258   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.989431   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:41.989622   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.989776   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.989952   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:41.990095   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:41.990291   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:41.990306   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850-m03 && echo "ha-570850-m03" | sudo tee /etc/hostname
	I0520 10:41:42.116021   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850-m03
	
	I0520 10:41:42.116051   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.118763   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.119103   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.119133   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.119313   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.119513   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.119666   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.119801   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.119958   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:42.120137   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:42.120158   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:41:42.238040   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:41:42.238073   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:41:42.238089   22116 buildroot.go:174] setting up certificates
	I0520 10:41:42.238119   22116 provision.go:84] configureAuth start
	I0520 10:41:42.238130   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:42.238390   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:42.241156   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.241500   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.241522   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.241693   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.243860   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.244276   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.244305   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.244419   22116 provision.go:143] copyHostCerts
	I0520 10:41:42.244453   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:41:42.244494   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:41:42.244505   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:41:42.244596   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:41:42.244679   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:41:42.244704   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:41:42.244714   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:41:42.244750   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:41:42.244804   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:41:42.244829   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:41:42.244838   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:41:42.244873   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:41:42.244954   22116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850-m03 san=[127.0.0.1 192.168.39.240 ha-570850-m03 localhost minikube]
	I0520 10:41:42.416951   22116 provision.go:177] copyRemoteCerts
	I0520 10:41:42.417012   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:41:42.417040   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.419714   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.420020   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.420048   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.420233   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.420401   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.420542   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.420646   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:42.508180   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:41:42.508245   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:41:42.538008   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:41:42.538081   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:41:42.562031   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:41:42.562115   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:41:42.588098   22116 provision.go:87] duration metric: took 349.962755ms to configureAuth
	I0520 10:41:42.588126   22116 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:41:42.588323   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:42.588402   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.590889   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.591250   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.591304   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.591423   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.591608   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.591765   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.591914   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.592034   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:42.592200   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:42.592219   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:41:42.865113   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:41:42.865155   22116 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:41:42.865166   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetURL
	I0520 10:41:42.866490   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Using libvirt version 6000000
	I0520 10:41:42.869020   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.869337   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.869376   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.869565   22116 main.go:141] libmachine: Docker is up and running!
	I0520 10:41:42.869583   22116 main.go:141] libmachine: Reticulating splines...
	I0520 10:41:42.869592   22116 client.go:171] duration metric: took 22.651695288s to LocalClient.Create
	I0520 10:41:42.869618   22116 start.go:167] duration metric: took 22.651753943s to libmachine.API.Create "ha-570850"
	I0520 10:41:42.869628   22116 start.go:293] postStartSetup for "ha-570850-m03" (driver="kvm2")
	I0520 10:41:42.869641   22116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:41:42.869678   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:42.869908   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:41:42.869949   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.872077   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.872476   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.872502   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.872654   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.872816   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.872998   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.873145   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:42.959873   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:41:42.964663   22116 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:41:42.964686   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:41:42.964773   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:41:42.964872   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:41:42.964883   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:41:42.965018   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:41:42.975890   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:41:43.001297   22116 start.go:296] duration metric: took 131.655107ms for postStartSetup
	I0520 10:41:43.001352   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetConfigRaw
	I0520 10:41:43.001902   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:43.004488   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.004878   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.004909   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.005294   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:41:43.005502   22116 start.go:128] duration metric: took 22.805329464s to createHost
	I0520 10:41:43.005551   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:43.007733   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.008132   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.008164   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.008297   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:43.008470   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.008628   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.008817   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:43.008989   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:43.009162   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:43.009175   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:41:43.121728   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201703.095618454
	
	I0520 10:41:43.121746   22116 fix.go:216] guest clock: 1716201703.095618454
	I0520 10:41:43.121753   22116 fix.go:229] Guest: 2024-05-20 10:41:43.095618454 +0000 UTC Remote: 2024-05-20 10:41:43.005516012 +0000 UTC m=+211.310919977 (delta=90.102442ms)
	I0520 10:41:43.121766   22116 fix.go:200] guest clock delta is within tolerance: 90.102442ms
	I0520 10:41:43.121771   22116 start.go:83] releasing machines lock for "ha-570850-m03", held for 22.921724286s
	I0520 10:41:43.121806   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.122104   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:43.124936   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.125272   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.125303   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.127628   22116 out.go:177] * Found network options:
	I0520 10:41:43.128949   22116 out.go:177]   - NO_PROXY=192.168.39.152,192.168.39.111
	W0520 10:41:43.130231   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 10:41:43.130253   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:41:43.130267   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.130888   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.131100   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.131199   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:41:43.131241   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	W0520 10:41:43.131287   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 10:41:43.131308   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:41:43.131366   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:41:43.131389   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:43.133884   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134148   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134379   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.134405   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134436   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:43.134613   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.134618   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.134651   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134786   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:43.134819   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:43.134998   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.134989   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:43.135180   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:43.135326   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:43.376620   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:41:43.382897   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:41:43.382956   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:41:43.399669   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:41:43.399698   22116 start.go:494] detecting cgroup driver to use...
	I0520 10:41:43.399747   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:41:43.419089   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:41:43.433639   22116 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:41:43.433698   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:41:43.446764   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:41:43.460806   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:41:43.595635   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:41:43.758034   22116 docker.go:233] disabling docker service ...
	I0520 10:41:43.758111   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:41:43.771659   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:41:43.784975   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:41:43.922908   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:41:44.047300   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:41:44.061537   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:41:44.080731   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:41:44.080783   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.090964   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:41:44.091021   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.100922   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.110659   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.120519   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:41:44.132082   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.142086   22116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.160922   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
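The sed edits above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before the daemon is restarted. A minimal sketch of how one could confirm the result over SSH on the new node (illustrative only, not part of the test run; expected values are taken from the commands just shown):

    # Illustrative check: inspect the drop-in the sed commands above just rewrote.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",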
	I0520 10:41:44.171514   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:41:44.181331   22116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:41:44.181373   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:41:44.194162   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:41:44.203326   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:41:44.319558   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:41:44.467225   22116 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:41:44.467297   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:41:44.473349   22116 start.go:562] Will wait 60s for crictl version
	I0520 10:41:44.473408   22116 ssh_runner.go:195] Run: which crictl
	I0520 10:41:44.477055   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:41:44.518961   22116 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:41:44.519038   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:41:44.549199   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:41:44.582808   22116 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:41:44.584239   22116 out.go:177]   - env NO_PROXY=192.168.39.152
	I0520 10:41:44.585651   22116 out.go:177]   - env NO_PROXY=192.168.39.152,192.168.39.111
	I0520 10:41:44.586922   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:44.589867   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:44.590250   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:44.590275   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:44.590465   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:41:44.594866   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:41:44.607801   22116 mustload.go:65] Loading cluster: ha-570850
	I0520 10:41:44.607995   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:44.608230   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:44.608267   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:44.622461   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0520 10:41:44.622817   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:44.623230   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:44.623250   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:44.623538   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:44.623725   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:41:44.625308   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:41:44.625651   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:44.625685   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:44.639873   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43981
	I0520 10:41:44.640316   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:44.640787   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:44.640806   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:44.641123   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:44.641320   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:41:44.641461   22116 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.240
	I0520 10:41:44.641472   22116 certs.go:194] generating shared ca certs ...
	I0520 10:41:44.641484   22116 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:41:44.641592   22116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:41:44.641628   22116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:41:44.641637   22116 certs.go:256] generating profile certs ...
	I0520 10:41:44.641699   22116 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:41:44.641722   22116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d
	I0520 10:41:44.641737   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.240 192.168.39.254]
	I0520 10:41:44.747823   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d ...
	I0520 10:41:44.747850   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d: {Name:mk750edf2e0080dc0e237e62210a814352975f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:41:44.748006   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d ...
	I0520 10:41:44.748018   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d: {Name:mkdfc7e4adc2c900ccc75ceac6cc66de69332fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:41:44.748089   22116 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:41:44.748215   22116 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
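The regenerated apiserver certificate now carries the SANs listed in the crypto.go line above (the service IPs, all three control-plane node IPs, and the VIP 192.168.39.254). A hedged way to inspect it on the Jenkins host, assuming openssl is available there (illustrative only):

    # Illustrative check: list the SANs baked into the regenerated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # Expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.152, 192.168.39.111, 192.168.39.240 and the VIP 192.168.39.254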
	I0520 10:41:44.748339   22116 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:41:44.748353   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:41:44.748369   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:41:44.748381   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:41:44.748393   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:41:44.748405   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:41:44.748417   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:41:44.748429   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:41:44.748440   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:41:44.748486   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:41:44.748511   22116 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:41:44.748520   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:41:44.748542   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:41:44.748563   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:41:44.748583   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:41:44.748617   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:41:44.748643   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:44.748657   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:41:44.748670   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:41:44.748699   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:41:44.751816   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:44.752187   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:41:44.752215   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:44.752414   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:41:44.752605   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:41:44.752811   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:41:44.752978   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:41:44.829239   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 10:41:44.835382   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 10:41:44.848371   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 10:41:44.853059   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 10:41:44.863419   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 10:41:44.867919   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 10:41:44.879256   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 10:41:44.883516   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0520 10:41:44.894057   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 10:41:44.898578   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 10:41:44.910264   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 10:41:44.915872   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 10:41:44.926900   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:41:44.952560   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:41:44.980607   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:41:45.005074   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:41:45.033449   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 10:41:45.060496   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:41:45.085131   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:41:45.109723   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:41:45.135217   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:41:45.159646   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:41:45.186572   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:41:45.213043   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 10:41:45.230302   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 10:41:45.246831   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 10:41:45.264322   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0520 10:41:45.281986   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 10:41:45.298302   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 10:41:45.317492   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 10:41:45.336015   22116 ssh_runner.go:195] Run: openssl version
	I0520 10:41:45.341596   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:41:45.352455   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:45.356910   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:45.356964   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:45.362710   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:41:45.373190   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:41:45.383648   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:41:45.388178   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:41:45.388217   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:41:45.394008   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:41:45.404367   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:41:45.414605   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:41:45.419035   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:41:45.419072   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:41:45.425800   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:41:45.436078   22116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:41:45.439971   22116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:41:45.440012   22116 kubeadm.go:928] updating node {m03 192.168.39.240 8443 v1.30.1 crio true true} ...
	I0520 10:41:45.440093   22116 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
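The kubelet unit rendered above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 313-byte scp further down in the log), which is how the node gets its --node-ip and --hostname-override flags. A sketch of how to confirm the effective flags on the node (illustrative only, assuming SSH access to ha-570850-m03):

    # Illustrative check: confirm the drop-in rendered above is the one systemd is using.
    systemctl cat kubelet | grep -- --node-ip
    # ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet ... --node-ip=192.168.39.240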
	I0520 10:41:45.440117   22116 kube-vip.go:115] generating kube-vip config ...
	I0520 10:41:45.440141   22116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:41:45.455646   22116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:41:45.455729   22116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
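This generated manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below (1441 bytes), so the kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443 using the plndr-cp-lock lease named in the config. A minimal sketch for verifying this once the node is up (illustrative only, not part of the test run):

    # Illustrative check: the VIP should be plumbed on the elected leader's eth0,
    # and the leader-election lease from the config above should exist.
    ip -4 addr show dev eth0 | grep 192.168.39.254
    kubectl -n kube-system get lease plndr-cp-lock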
	I0520 10:41:45.455784   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:41:45.465614   22116 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 10:41:45.465659   22116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 10:41:45.475456   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 10:41:45.475467   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 10:41:45.475484   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:41:45.475500   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:41:45.475457   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 10:41:45.475571   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:41:45.475554   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:41:45.475642   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:41:45.490052   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 10:41:45.490088   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 10:41:45.490114   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 10:41:45.490121   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:41:45.490090   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 10:41:45.490212   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:41:45.501428   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 10:41:45.501467   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 10:41:46.371500   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 10:41:46.381788   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 10:41:46.399004   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:41:46.416922   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:41:46.433919   22116 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:41:46.438011   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:41:46.450512   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:41:46.595259   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:41:46.613101   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:41:46.613499   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:46.613552   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:46.629013   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I0520 10:41:46.629389   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:46.629867   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:46.629891   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:46.630264   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:46.630474   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:41:46.630629   22116 start.go:316] joinCluster: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cluster
Name:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:41:46.630750   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 10:41:46.630773   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:41:46.633565   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:46.634052   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:41:46.634081   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:46.634237   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:41:46.634395   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:41:46.634547   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:41:46.634706   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:41:46.798210   22116 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:41:46.798262   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6795am.2royruoiinmyx7oy --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m03 --control-plane --apiserver-advertise-address=192.168.39.240 --apiserver-bind-port=8443"
	I0520 10:42:09.766831   22116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6795am.2royruoiinmyx7oy --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m03 --control-plane --apiserver-advertise-address=192.168.39.240 --apiserver-bind-port=8443": (22.968489292s)
	I0520 10:42:09.766902   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 10:42:10.368895   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-570850-m03 minikube.k8s.io/updated_at=2024_05_20T10_42_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-570850 minikube.k8s.io/primary=false
	I0520 10:42:10.482377   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-570850-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 10:42:10.583198   22116 start.go:318] duration metric: took 23.952566788s to joinCluster
	I0520 10:42:10.583301   22116 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:42:10.584552   22116 out.go:177] * Verifying Kubernetes components...
	I0520 10:42:10.583716   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:42:10.585786   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:42:10.843675   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:42:10.874072   22116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:42:10.874303   22116 kapi.go:59] client config for ha-570850: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 10:42:10.874372   22116 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I0520 10:42:10.874558   22116 node_ready.go:35] waiting up to 6m0s for node "ha-570850-m03" to be "Ready" ...
	I0520 10:42:10.874670   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:10.874679   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:10.874686   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:10.874689   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:10.878086   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:11.374805   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:11.374828   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:11.374837   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:11.374842   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:11.378395   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:11.875438   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:11.875461   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:11.875471   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:11.875475   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:11.878572   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:12.375000   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:12.375020   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:12.375032   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:12.375037   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:12.378978   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:12.874936   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:12.874956   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:12.874963   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:12.874968   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:12.878153   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:12.878912   22116 node_ready.go:53] node "ha-570850-m03" has status "Ready":"False"
	I0520 10:42:13.374715   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:13.374737   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:13.374745   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:13.374749   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:13.378127   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:13.875403   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:13.875430   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:13.875441   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:13.875446   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:13.879182   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:14.375415   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:14.375437   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:14.375444   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:14.375450   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:14.379355   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:14.875106   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:14.875126   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:14.875134   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:14.875137   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:14.878697   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:14.879422   22116 node_ready.go:53] node "ha-570850-m03" has status "Ready":"False"
	I0520 10:42:15.375080   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:15.375101   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:15.375109   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:15.375113   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:15.378574   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:15.875287   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:15.875311   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:15.875322   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:15.875330   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:15.879125   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:16.375244   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:16.375264   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:16.375272   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:16.375274   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:16.379217   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:16.875344   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:16.875364   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:16.875372   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:16.875376   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:16.879395   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:16.879834   22116 node_ready.go:53] node "ha-570850-m03" has status "Ready":"False"
	I0520 10:42:17.375630   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:17.375658   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.375669   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.375676   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.379557   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.875436   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:17.875462   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.875473   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.875481   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.878707   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.879333   22116 node_ready.go:49] node "ha-570850-m03" has status "Ready":"True"
	I0520 10:42:17.879350   22116 node_ready.go:38] duration metric: took 7.004780472s for node "ha-570850-m03" to be "Ready" ...
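
	(Aside: between 10:42:10 and 10:42:17 the log polls GET /api/v1/nodes/ha-570850-m03 roughly every 500ms until the node's Ready condition flips to True; node_ready.go reports 7.004780472s for this run. A minimal client-go sketch of that kind of readiness poll — assuming a clientset built as in the earlier sketch, with a helper name of our own — is shown below.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady mirrors the loop in the log above: it re-fetches the node roughly every
// 500ms until the NodeReady condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, clientset *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q was not Ready within %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```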
	I0520 10:42:17.879359   22116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:42:17.879407   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:17.879416   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.879423   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.879427   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.885402   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:42:17.891971   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.892046   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jw77g
	I0520 10:42:17.892057   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.892067   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.892075   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.894683   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.895475   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:17.895489   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.895496   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.895501   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.898059   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.898666   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.898684   22116 pod_ready.go:81] duration metric: took 6.693315ms for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.898698   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.898744   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xr965
	I0520 10:42:17.898751   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.898758   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.898763   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.900803   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.901502   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:17.901518   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.901526   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.901530   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.904147   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.904592   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.904608   22116 pod_ready.go:81] duration metric: took 5.89957ms for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.904616   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.904660   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850
	I0520 10:42:17.904667   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.904674   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.904680   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.907493   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.908403   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:17.908417   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.908427   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.908435   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.910798   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.911306   22116 pod_ready.go:92] pod "etcd-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.911318   22116 pod_ready.go:81] duration metric: took 6.695204ms for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.911328   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.911378   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:42:17.911385   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.911392   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.911398   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.914871   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.915706   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:17.915719   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.915725   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.915728   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.919680   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.920275   22116 pod_ready.go:92] pod "etcd-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.920292   22116 pod_ready.go:81] duration metric: took 8.952781ms for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.920302   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:18.075661   22116 request.go:629] Waited for 155.260393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.075737   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.075744   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.075754   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.075758   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.082837   22116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 10:42:18.275867   22116 request.go:629] Waited for 192.36193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.275942   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.275948   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.275954   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.275959   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.278932   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:18.476476   22116 request.go:629] Waited for 55.217948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.476542   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.476549   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.476560   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.476567   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.480118   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:18.676488   22116 request.go:629] Waited for 195.356961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.676542   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.676547   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.676554   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.676559   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.680097   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:18.920954   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.920976   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.920987   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.920994   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.925668   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:42:19.076051   22116 request.go:629] Waited for 149.304361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.076134   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.076145   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.076169   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.076180   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.079433   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:19.421354   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:19.421371   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.421379   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.421383   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.424148   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:19.476066   22116 request.go:629] Waited for 51.205993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.476138   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.476143   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.476150   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.476154   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.478974   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:19.920652   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:19.920673   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.920680   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.920684   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.925428   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:42:19.926242   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.926254   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.926261   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.926266   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.929319   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:19.929956   22116 pod_ready.go:102] pod "etcd-ha-570850-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 10:42:20.421494   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:20.421515   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.421522   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.421526   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.424810   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:20.425540   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:20.425555   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.425564   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.425571   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.428248   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:20.920565   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:20.920595   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.920606   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.920613   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.924245   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:20.924763   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:20.924776   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.924784   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.924789   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.927558   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:21.420634   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:21.420653   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.420661   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.420665   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.423882   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.424893   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:21.424906   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.424913   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.424916   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.427316   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:21.427818   22116 pod_ready.go:92] pod "etcd-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:21.427835   22116 pod_ready.go:81] duration metric: took 3.507526021s for pod "etcd-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.427851   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.427904   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850
	I0520 10:42:21.427913   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.427919   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.427923   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.430361   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:21.476149   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:21.476175   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.476198   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.476204   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.479343   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.479822   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:21.479841   22116 pod_ready.go:81] duration metric: took 51.984111ms for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.479851   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.676270   22116 request.go:629] Waited for 196.359456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m02
	I0520 10:42:21.676318   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m02
	I0520 10:42:21.676323   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.676330   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.676336   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.679708   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.876186   22116 request.go:629] Waited for 195.758263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:21.876239   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:21.876246   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.876255   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.876261   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.879504   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.880203   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:21.880221   22116 pod_ready.go:81] duration metric: took 400.36329ms for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.880234   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.075888   22116 request.go:629] Waited for 195.580244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m03
	I0520 10:42:22.075950   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m03
	I0520 10:42:22.075955   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.075962   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.075967   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.079392   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.275529   22116 request.go:629] Waited for 195.269451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:22.275580   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:22.275584   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.275591   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.275599   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.279161   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.279938   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:22.279956   22116 pod_ready.go:81] duration metric: took 399.71616ms for pod "kube-apiserver-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.279965   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.476118   22116 request.go:629] Waited for 196.071975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:42:22.476169   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:42:22.476174   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.476181   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.476186   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.479789   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.676418   22116 request.go:629] Waited for 195.356393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:22.676485   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:22.676504   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.676511   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.676517   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.679997   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.680575   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:22.680591   22116 pod_ready.go:81] duration metric: took 400.62041ms for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.680601   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.875715   22116 request.go:629] Waited for 195.028483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:42:22.875767   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:42:22.875772   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.875779   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.875784   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.879029   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.075963   22116 request.go:629] Waited for 196.246261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.076029   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.076035   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.076042   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.076048   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.079726   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.080269   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:23.080287   22116 pod_ready.go:81] duration metric: took 399.678793ms for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.080300   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.276421   22116 request.go:629] Waited for 196.059923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m03
	I0520 10:42:23.276493   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m03
	I0520 10:42:23.276500   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.276510   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.276514   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.280619   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:42:23.476322   22116 request.go:629] Waited for 194.727822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:23.476487   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:23.476506   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.476526   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.476541   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.481814   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:42:23.482517   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:23.482537   22116 pod_ready.go:81] duration metric: took 402.22974ms for pod "kube-controller-manager-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.482547   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.675503   22116 request.go:629] Waited for 192.874541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:42:23.675578   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:42:23.675590   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.675602   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.675610   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.679052   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.876283   22116 request.go:629] Waited for 196.358646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.876345   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.876350   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.876357   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.876365   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.879811   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.880538   22116 pod_ready.go:92] pod "kube-proxy-6s5x7" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:23.880554   22116 pod_ready.go:81] duration metric: took 398.001639ms for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.880563   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.075774   22116 request.go:629] Waited for 195.155417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:42:24.075856   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:42:24.075863   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.075877   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.075884   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.079417   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.276401   22116 request.go:629] Waited for 196.280633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:24.276487   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:24.276498   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.276508   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.276516   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.279947   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.280426   22116 pod_ready.go:92] pod "kube-proxy-dnhm6" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:24.280443   22116 pod_ready.go:81] duration metric: took 399.874659ms for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.280452   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n5std" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.475518   22116 request.go:629] Waited for 194.997317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5std
	I0520 10:42:24.475585   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5std
	I0520 10:42:24.475593   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.475604   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.475610   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.479214   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.675964   22116 request.go:629] Waited for 195.834355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:24.676046   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:24.676056   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.676068   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.676078   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.679964   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.680600   22116 pod_ready.go:92] pod "kube-proxy-n5std" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:24.680622   22116 pod_ready.go:81] duration metric: took 400.162798ms for pod "kube-proxy-n5std" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.680634   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.875566   22116 request.go:629] Waited for 194.840434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:42:24.875629   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:42:24.875637   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.875651   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.875660   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.879192   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.076297   22116 request.go:629] Waited for 196.375594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:25.076358   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:25.076363   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.076370   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.076375   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.080140   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.080965   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:25.080999   22116 pod_ready.go:81] duration metric: took 400.357085ms for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.081014   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.276429   22116 request.go:629] Waited for 195.341783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:42:25.276508   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:42:25.276519   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.276529   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.276536   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.280412   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.475472   22116 request.go:629] Waited for 194.28044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:25.475547   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:25.475552   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.475559   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.475563   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.479286   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.480175   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:25.480216   22116 pod_ready.go:81] duration metric: took 399.186583ms for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.480252   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.675498   22116 request.go:629] Waited for 195.101909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m03
	I0520 10:42:25.675549   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m03
	I0520 10:42:25.675555   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.675564   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.675573   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.679043   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.876224   22116 request.go:629] Waited for 196.274957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:25.876289   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:25.876297   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.876305   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.876310   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.879394   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.880015   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:25.880032   22116 pod_ready.go:81] duration metric: took 399.765902ms for pod "kube-scheduler-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.880046   22116 pod_ready.go:38] duration metric: took 8.000678783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:42:25.880074   22116 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:42:25.880137   22116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:42:25.897431   22116 api_server.go:72] duration metric: took 15.314090229s to wait for apiserver process to appear ...
	I0520 10:42:25.897451   22116 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:42:25.897480   22116 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0520 10:42:25.903203   22116 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0520 10:42:25.903263   22116 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I0520 10:42:25.903273   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.903284   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.903292   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.904216   22116 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 10:42:25.904273   22116 api_server.go:141] control plane version: v1.30.1
	I0520 10:42:25.904293   22116 api_server.go:131] duration metric: took 6.834697ms to wait for apiserver health ...
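
	(Aside: the health check above is a plain GET on /healthz, expecting the literal body "ok", followed by GET /version to read the control-plane version, v1.30.1 in this run. A small sketch of the same pair of calls via client-go's discovery client, with the helper name being ours and the clientset assumed as before:)

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer mirrors the probe above: a raw GET on /healthz must return "ok",
// then /version reports the control-plane version.
func checkAPIServer(ctx context.Context, clientset *kubernetes.Clientset) (string, error) {
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return "", err
	}
	if string(body) != "ok" {
		return "", fmt.Errorf("healthz returned %q, want ok", string(body))
	}
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil
}
```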
	I0520 10:42:25.904304   22116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:42:26.075691   22116 request.go:629] Waited for 171.30407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.075784   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.075792   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.075808   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.075818   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.081925   22116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 10:42:26.089411   22116 system_pods.go:59] 24 kube-system pods found
	I0520 10:42:26.089447   22116 system_pods.go:61] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:42:26.089453   22116 system_pods.go:61] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:42:26.089459   22116 system_pods.go:61] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:42:26.089463   22116 system_pods.go:61] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:42:26.089467   22116 system_pods.go:61] "etcd-ha-570850-m03" [76f81192-b8f1-4c54-addc-c7d53b97d786] Running
	I0520 10:42:26.089472   22116 system_pods.go:61] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:42:26.089477   22116 system_pods.go:61] "kindnet-6tkd5" [44572e22-dff4-4d00-b3ed-0e3e4b9a3b6c] Running
	I0520 10:42:26.089483   22116 system_pods.go:61] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:42:26.089489   22116 system_pods.go:61] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:42:26.089496   22116 system_pods.go:61] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:42:26.089502   22116 system_pods.go:61] "kube-apiserver-ha-570850-m03" [114f6328-9773-418b-b09f-f8ac2c250750] Running
	I0520 10:42:26.089511   22116 system_pods.go:61] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:42:26.089516   22116 system_pods.go:61] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:42:26.089522   22116 system_pods.go:61] "kube-controller-manager-ha-570850-m03" [f8a61190-3598-4db2-8c63-726463c84b8e] Running
	I0520 10:42:26.089530   22116 system_pods.go:61] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:42:26.089535   22116 system_pods.go:61] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:42:26.089541   22116 system_pods.go:61] "kube-proxy-n5std" [226b707e-c7ba-45c6-814e-a50a2619ea3b] Running
	I0520 10:42:26.089546   22116 system_pods.go:61] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:42:26.089551   22116 system_pods.go:61] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:42:26.089557   22116 system_pods.go:61] "kube-scheduler-ha-570850-m03" [803f0fcb-7bfb-420e-b033-991d63d08650] Running
	I0520 10:42:26.089562   22116 system_pods.go:61] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:42:26.089567   22116 system_pods.go:61] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:42:26.089572   22116 system_pods.go:61] "kube-vip-ha-570850-m03" [2367fb03-581a-4f3b-b9c1-63ace73a3924] Running
	I0520 10:42:26.089578   22116 system_pods.go:61] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:42:26.089585   22116 system_pods.go:74] duration metric: took 185.274444ms to wait for pod list to return data ...
	I0520 10:42:26.089602   22116 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:42:26.276065   22116 request.go:629] Waited for 186.377447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:42:26.276153   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:42:26.276164   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.276174   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.276181   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.279283   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:26.279425   22116 default_sa.go:45] found service account: "default"
	I0520 10:42:26.279446   22116 default_sa.go:55] duration metric: took 189.837111ms for default service account to be created ...
	I0520 10:42:26.279457   22116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:42:26.475661   22116 request.go:629] Waited for 196.140142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.475756   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.475769   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.475781   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.475787   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.485226   22116 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 10:42:26.492589   22116 system_pods.go:86] 24 kube-system pods found
	I0520 10:42:26.492625   22116 system_pods.go:89] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:42:26.492640   22116 system_pods.go:89] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:42:26.492647   22116 system_pods.go:89] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:42:26.492653   22116 system_pods.go:89] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:42:26.492678   22116 system_pods.go:89] "etcd-ha-570850-m03" [76f81192-b8f1-4c54-addc-c7d53b97d786] Running
	I0520 10:42:26.492684   22116 system_pods.go:89] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:42:26.492696   22116 system_pods.go:89] "kindnet-6tkd5" [44572e22-dff4-4d00-b3ed-0e3e4b9a3b6c] Running
	I0520 10:42:26.492702   22116 system_pods.go:89] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:42:26.492708   22116 system_pods.go:89] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:42:26.492714   22116 system_pods.go:89] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:42:26.492720   22116 system_pods.go:89] "kube-apiserver-ha-570850-m03" [114f6328-9773-418b-b09f-f8ac2c250750] Running
	I0520 10:42:26.492732   22116 system_pods.go:89] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:42:26.492738   22116 system_pods.go:89] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:42:26.492745   22116 system_pods.go:89] "kube-controller-manager-ha-570850-m03" [f8a61190-3598-4db2-8c63-726463c84b8e] Running
	I0520 10:42:26.492751   22116 system_pods.go:89] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:42:26.492757   22116 system_pods.go:89] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:42:26.492767   22116 system_pods.go:89] "kube-proxy-n5std" [226b707e-c7ba-45c6-814e-a50a2619ea3b] Running
	I0520 10:42:26.492773   22116 system_pods.go:89] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:42:26.492779   22116 system_pods.go:89] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:42:26.492785   22116 system_pods.go:89] "kube-scheduler-ha-570850-m03" [803f0fcb-7bfb-420e-b033-991d63d08650] Running
	I0520 10:42:26.492792   22116 system_pods.go:89] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:42:26.492801   22116 system_pods.go:89] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:42:26.492808   22116 system_pods.go:89] "kube-vip-ha-570850-m03" [2367fb03-581a-4f3b-b9c1-63ace73a3924] Running
	I0520 10:42:26.492813   22116 system_pods.go:89] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:42:26.492821   22116 system_pods.go:126] duration metric: took 213.358057ms to wait for k8s-apps to be running ...
	I0520 10:42:26.492841   22116 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:42:26.492897   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:42:26.509575   22116 system_svc.go:56] duration metric: took 16.732603ms WaitForService to wait for kubelet
	I0520 10:42:26.509612   22116 kubeadm.go:576] duration metric: took 15.926273867s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:42:26.509649   22116 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:42:26.676488   22116 request.go:629] Waited for 166.752601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I0520 10:42:26.676543   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I0520 10:42:26.676548   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.676555   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.676561   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.680442   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:26.681450   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:42:26.681483   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:42:26.681513   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:42:26.681519   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:42:26.681525   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:42:26.681534   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:42:26.681540   22116 node_conditions.go:105] duration metric: took 171.884501ms to run NodePressure ...
	I0520 10:42:26.681554   22116 start.go:240] waiting for startup goroutines ...
	I0520 10:42:26.681579   22116 start.go:254] writing updated cluster config ...
	I0520 10:42:26.681875   22116 ssh_runner.go:195] Run: rm -f paused
	I0520 10:42:26.733767   22116 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:42:26.735135   22116 out.go:177] * Done! kubectl is now configured to use "ha-570850" cluster and "default" namespace by default
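The tail of this log is minikube's post-start verification: it waits for every kube-system pod to be Running, confirms over SSH that the kubelet systemd unit is active, and records node capacity/pressure before declaring the ha-570850 cluster ready. The same checks can be repeated by hand; a minimal sketch, assuming the ha-570850 profile from this run still exists and kubectl plus the built minikube binary are available:

  kubectl --context ha-570850 get pods -n kube-system                               # the 24 kube-system pods listed above should be Running
  out/minikube-linux-amd64 -p ha-570850 ssh "sudo systemctl is-active kubelet"      # should print "active"
  kubectl --context ha-570850 describe nodes | grep -A7 "Conditions:"               # per-node pressure conditions behind the NodePressure check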
	
	
	==> CRI-O <==
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.757155394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201959757130078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=834f62aa-c583-42ce-be57-abd4342eb64e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.757985415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c53590f-5ac4-4ca2-9f6c-cf4349fc32fa name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.758040038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c53590f-5ac4-4ca2-9f6c-cf4349fc32fa name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.758505945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c53590f-5ac4-4ca2-9f6c-cf4349fc32fa name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.803007501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b302b556-56b0-496f-aa9b-8613325026d2 name=/runtime.v1.RuntimeService/Version
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.803098125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b302b556-56b0-496f-aa9b-8613325026d2 name=/runtime.v1.RuntimeService/Version
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.804115526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=907d0382-33c4-4708-a362-0c97c7daa74a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.804564987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201959804542097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=907d0382-33c4-4708-a362-0c97c7daa74a name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.805255109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d067d69a-c362-4c1f-a9e2-9d694bb5cd34 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.805306752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d067d69a-c362-4c1f-a9e2-9d694bb5cd34 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.805587737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d067d69a-c362-4c1f-a9e2-9d694bb5cd34 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.849815300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82d268a0-0199-48e3-b7b5-6daac76f5e70 name=/runtime.v1.RuntimeService/Version
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.849904148Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82d268a0-0199-48e3-b7b5-6daac76f5e70 name=/runtime.v1.RuntimeService/Version
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.850973269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50267696-e502-4c89-b8b0-2304ba1a8b20 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.851396349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201959851372798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50267696-e502-4c89-b8b0-2304ba1a8b20 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.852070923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ffe5808-558c-49a0-8df9-5f46752b9277 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.852137498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ffe5808-558c-49a0-8df9-5f46752b9277 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.852387725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ffe5808-558c-49a0-8df9-5f46752b9277 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.887711946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=488f24bc-9b55-4b7f-bf60-46b6d312b747 name=/runtime.v1.RuntimeService/Version
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.887803248Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=488f24bc-9b55-4b7f-bf60-46b6d312b747 name=/runtime.v1.RuntimeService/Version
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.888884217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1abb61c9-ed39-4e19-a771-e7ac41cb0135 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.889357131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716201959889333641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1abb61c9-ed39-4e19-a771-e7ac41cb0135 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.890065760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b69d339d-b9f0-47c6-93ef-f02f702819c4 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.890124321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b69d339d-b9f0-47c6-93ef-f02f702819c4 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:45:59 ha-570850 crio[684]: time="2024-05-20 10:45:59.890943707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b69d339d-b9f0-47c6-93ef-f02f702819c4 name=/runtime.v1.RuntimeService/ListContainers
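The Version / ImageFsInfo / ListContainers request-response pairs above are CRI-O answering the kubelet's periodic CRI polling; at debug log level every poll is recorded, which is why the same container list repeats every few hundred milliseconds with only the request id changing. To pull these entries or query the runtime directly, a sketch (assuming crictl inside the minikube guest is configured against the CRI-O socket, as it normally is):

  out/minikube-linux-amd64 -p ha-570850 ssh "sudo journalctl -u crio --since '5 minutes ago'"   # raw CRI-O journal, same entries as above
  out/minikube-linux-amd64 -p ha-570850 ssh "sudo crictl ps -a"                                 # same data as ListContainers, tabulated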
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ff013665ebf9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1b009be6473f8       busybox-fc5497c4f-q9r5v
	5dfd2933bae47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f700be37dc7a1       storage-provisioner
	90d27af32d37b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7f39567bf239f       coredns-7db6d8ff4d-jw77g
	db09b659f2af2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   5862a2beb755c       coredns-7db6d8ff4d-xr965
	501666a770a6d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   f62741f19bc37       kindnet-2zphj
	2ab644050dd47       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   17d9c116389c0       kube-proxy-dnhm6
	5497e7ab0c986       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   c9c86ebe9b643       kube-vip-ha-570850
	3ed532de1dd4e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   6e5a2b4300b4d       etcd-ha-570850
	5f81e882715f6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   760d0e6343aaf       kube-scheduler-ha-570850
	43a07bf3d1ba6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   57ea16243b58b       kube-controller-manager-ha-570850
	25ac8a29b1819       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   6f83c307ff914       kube-apiserver-ha-570850
	
	
	==> coredns [90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d] <==
	[INFO] 10.244.0.4:37188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125121s
	[INFO] 10.244.0.4:45557 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011117s
	[INFO] 10.244.1.2:55262 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001714739s
	[INFO] 10.244.1.2:48903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120367s
	[INFO] 10.244.1.2:57868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001671078s
	[INFO] 10.244.1.2:56995 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082609s
	[INFO] 10.244.2.2:58942 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706357s
	[INFO] 10.244.2.2:35126 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090874s
	[INFO] 10.244.2.2:37909 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008474s
	[INFO] 10.244.2.2:51462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166312s
	[INFO] 10.244.2.2:33520 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093626s
	[INFO] 10.244.0.4:59833 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088933s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093196s
	[INFO] 10.244.1.2:50071 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124886s
	[INFO] 10.244.1.2:42327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143875s
	[INFO] 10.244.0.4:35557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115628s
	[INFO] 10.244.0.4:36441 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148453s
	[INFO] 10.244.0.4:52951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110826s
	[INFO] 10.244.0.4:41170 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075418s
	[INFO] 10.244.1.2:54236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124156s
	[INFO] 10.244.1.2:55716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008044s
	[INFO] 10.244.1.2:51672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128329s
	[INFO] 10.244.2.2:44501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195618s
	[INFO] 10.244.2.2:48083 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105613s
	[INFO] 10.244.2.2:41793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131302s
	
	
	==> coredns [db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c] <==
	[INFO] 10.244.1.2:55952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176159s
	[INFO] 10.244.1.2:40232 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000152961s
	[INFO] 10.244.1.2:39961 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001702696s
	[INFO] 10.244.2.2:55319 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001364377s
	[INFO] 10.244.2.2:57122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001355331s
	[INFO] 10.244.0.4:59866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156011s
	[INFO] 10.244.0.4:56885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009797641s
	[INFO] 10.244.0.4:37828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138434s
	[INFO] 10.244.1.2:59957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100136s
	[INFO] 10.244.1.2:41914 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180125s
	[INFO] 10.244.1.2:38547 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077887s
	[INFO] 10.244.1.2:35980 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172936s
	[INFO] 10.244.2.2:50204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019701s
	[INFO] 10.244.2.2:42765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076194s
	[INFO] 10.244.2.2:44033 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001282105s
	[INFO] 10.244.0.4:49359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121025s
	[INFO] 10.244.0.4:41717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077487s
	[INFO] 10.244.0.4:44192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062315s
	[INFO] 10.244.1.2:40081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176555s
	[INFO] 10.244.2.2:58201 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150016s
	[INFO] 10.244.2.2:56308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145342s
	[INFO] 10.244.2.2:36926 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093685s
	[INFO] 10.244.2.2:42968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009681s
	[INFO] 10.244.1.2:34643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:49012 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281845s
	
	
	==> describe nodes <==
	Name:               ha-570850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_38_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:38:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:45:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:39:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-570850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 328136f6dc1d4de39599ea377c28404e
	  System UUID:                328136f6-dc1d-4de3-9599-ea377c28404e
	  Boot ID:                    bc8aa620-b931-4636-9be0-cbe2c643d897
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q9r5v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-7db6d8ff4d-jw77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 coredns-7db6d8ff4d-xr965             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m55s
	  kube-system                 etcd-ha-570850                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m8s
	  kube-system                 kindnet-2zphj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m55s
	  kube-system                 kube-apiserver-ha-570850             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-ha-570850    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-proxy-dnhm6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-ha-570850             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-vip-ha-570850                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m53s  kube-proxy       
	  Normal  Starting                 7m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m8s   kubelet          Node ha-570850 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m8s   kubelet          Node ha-570850 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m8s   kubelet          Node ha-570850 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m55s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal  NodeReady                6m53s  kubelet          Node ha-570850 status is now: NodeReady
	  Normal  RegisteredNode           4m43s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal  RegisteredNode           3m35s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	
	
	Name:               ha-570850-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:40:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:43:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-570850-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 679a56af7395441fb77f57941486fb9b
	  System UUID:                679a56af-7395-441f-b77f-57941486fb9b
	  Boot ID:                    354f7a3a-30e0-4c1e-b310-a2f305dc251d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wpv4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-570850-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-f5x4s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m1s
	  kube-system                 kube-apiserver-ha-570850-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-570850-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-6s5x7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-ha-570850-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-vip-ha-570850-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m1s (x8 over 5m1s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x8 over 5m1s)  kubelet          Node ha-570850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x7 over 5m1s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m                   node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           4m43s                node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           3m35s                node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  NodeNotReady             105s                 node-controller  Node ha-570850-m02 status is now: NodeNotReady
	
	
	Name:               ha-570850-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_42_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:42:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-570850-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbb5ce07c2d4d77b22b999a94184ed3
	  System UUID:                1dbb5ce0-7c2d-4d77-b22b-999a94184ed3
	  Boot ID:                    170869d7-c32e-447c-b434-e7c7bb44441c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-x2wtn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-570850-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-6tkd5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-570850-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-ha-570850-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-n5std                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-570850-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-vip-ha-570850-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-570850-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal  RegisteredNode           3m35s                  node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	
	
	Name:               ha-570850-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_43_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:43:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-570850-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f2f8926106479fbb09b362a7c92c83
	  System UUID:                14f2f892-6106-479f-bb09-b362a7c92c83
	  Boot ID:                    2b747312-6091-4a44-bf3d-81557c80cae1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-p9s2j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-xt8ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-570850-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-570850-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May20 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039898] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514141] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.438828] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.905313] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063920] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182694] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.123506] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.262222] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.187463] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.199064] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.059504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.313134] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[May20 10:39] kauditd_printk_skb: 21 callbacks suppressed
	[May20 10:41] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d] <==
	{"level":"warn","ts":"2024-05-20T10:45:59.999862Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c7f63ab15238d140","error":"Get \"https://192.168.39.111:2380/version\": dial tcp 192.168.39.111:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-20T10:46:00.092431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.135326Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7f63ab15238d140","rtt":"952.16µs","error":"dial tcp 192.168.39.111:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-20T10:46:00.135422Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7f63ab15238d140","rtt":"10.137085ms","error":"dial tcp 192.168.39.111:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-20T10:46:00.192028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.20047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.222934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.234112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.247179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.262985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.275935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.293562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.294175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.29947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.30284Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.312711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.319624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.327031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.331697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.334577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.340335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.345681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.350722Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.358724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:00.392996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:46:00 up 7 min,  0 users,  load average: 0.12, 0.25, 0.13
	Linux ha-570850 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141] <==
	I0520 10:45:27.972970       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:45:37.984615       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:45:37.984728       1 main.go:227] handling current node
	I0520 10:45:37.984745       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:45:37.984755       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:45:37.984904       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:45:37.984938       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:45:37.985037       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:45:37.985069       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:45:48.058243       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:45:48.058338       1 main.go:227] handling current node
	I0520 10:45:48.058364       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:45:48.058394       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:45:48.058759       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:45:48.059115       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:45:48.059220       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:45:48.059241       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:45:58.064939       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:45:58.064981       1 main.go:227] handling current node
	I0520 10:45:58.064992       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:45:58.064998       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:45:58.065102       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:45:58.065127       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:45:58.065171       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:45:58.065175       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab] <==
	W0520 10:38:51.432066       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I0520 10:38:51.433008       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 10:38:51.439563       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 10:38:51.655574       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 10:38:52.713561       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 10:38:52.731317       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 10:38:52.892821       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 10:39:05.711017       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0520 10:39:05.810742       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0520 10:42:33.706486       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46536: use of closed network connection
	E0520 10:42:33.884526       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46554: use of closed network connection
	E0520 10:42:34.065757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46574: use of closed network connection
	E0520 10:42:34.274096       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46582: use of closed network connection
	E0520 10:42:34.446296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46606: use of closed network connection
	E0520 10:42:34.637920       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46614: use of closed network connection
	E0520 10:42:34.841021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46626: use of closed network connection
	E0520 10:42:35.030344       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46638: use of closed network connection
	E0520 10:42:35.210788       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46654: use of closed network connection
	E0520 10:42:35.492296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46680: use of closed network connection
	E0520 10:42:35.660249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46704: use of closed network connection
	E0520 10:42:35.842898       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46730: use of closed network connection
	E0520 10:42:36.014217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46746: use of closed network connection
	E0520 10:42:36.196515       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46774: use of closed network connection
	E0520 10:42:36.369345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46796: use of closed network connection
	W0520 10:43:51.442538       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152 192.168.39.240]
	
	
	==> kube-controller-manager [43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68] <==
	I0520 10:42:28.089329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.918µs"
	I0520 10:42:28.192949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0520 10:42:29.201586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.828µs"
	I0520 10:42:29.220422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.313µs"
	I0520 10:42:29.225611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.858µs"
	I0520 10:42:29.242441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.319µs"
	I0520 10:42:29.249621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.523µs"
	I0520 10:42:29.258271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.953µs"
	I0520 10:42:29.554527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.429µs"
	I0520 10:42:29.577101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.905µs"
	I0520 10:42:31.038899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.294168ms"
	I0520 10:42:31.039181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.495µs"
	I0520 10:42:32.816575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.341211ms"
	I0520 10:42:32.816951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.198µs"
	I0520 10:42:33.228261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.864906ms"
	I0520 10:42:33.228553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.234µs"
	E0520 10:43:08.861852       1 certificate_controller.go:146] Sync csr-qzlrc failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-qzlrc": the object has been modified; please apply your changes to the latest version and try again
	E0520 10:43:08.873271       1 certificate_controller.go:146] Sync csr-qzlrc failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-qzlrc": the object has been modified; please apply your changes to the latest version and try again
	I0520 10:43:08.962752       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-570850-m04\" does not exist"
	I0520 10:43:08.976348       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-570850-m04" podCIDRs=["10.244.3.0/24"]
	I0520 10:43:10.170326       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-570850-m04"
	I0520 10:43:19.317412       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-570850-m04"
	I0520 10:44:15.211531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-570850-m04"
	I0520 10:44:15.376965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.38039ms"
	I0520 10:44:15.377067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.13µs"
	
	
	==> kube-proxy [2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0] <==
	I0520 10:39:06.784564       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:39:06.815937       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0520 10:39:06.917409       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:39:06.917466       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:39:06.917483       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:39:06.922471       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:39:06.922869       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:39:06.922935       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:39:06.924245       1 config.go:192] "Starting service config controller"
	I0520 10:39:06.924292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:39:06.924321       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:39:06.924324       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:39:06.925272       1 config.go:319] "Starting node config controller"
	I0520 10:39:06.925313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:39:07.024717       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:39:07.024779       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:39:07.025449       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6] <==
	E0520 10:38:50.885186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0520 10:38:52.984939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 10:42:06.982847       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7q2gn\": pod kube-proxy-7q2gn is already assigned to node \"ha-570850-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7q2gn" node="ha-570850-m03"
	E0520 10:42:06.983037       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f5355064-ed25-4347-963d-911b76bd30aa(kube-system/kube-proxy-7q2gn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7q2gn"
	E0520 10:42:06.983065       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7q2gn\": pod kube-proxy-7q2gn is already assigned to node \"ha-570850-m03\"" pod="kube-system/kube-proxy-7q2gn"
	I0520 10:42:06.983108       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7q2gn" node="ha-570850-m03"
	E0520 10:43:09.036606       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xt8ht\": pod kube-proxy-xt8ht is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xt8ht" node="ha-570850-m04"
	E0520 10:43:09.036861       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xt8ht\": pod kube-proxy-xt8ht is already assigned to node \"ha-570850-m04\"" pod="kube-system/kube-proxy-xt8ht"
	I0520 10:43:09.036997       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xt8ht" node="ha-570850-m04"
	E0520 10:43:09.037530       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p9s2j\": pod kindnet-p9s2j is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p9s2j" node="ha-570850-m04"
	E0520 10:43:09.037595       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1fe31478-4f62-41d2-b54c-f6c77522f50b(kube-system/kindnet-p9s2j) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p9s2j"
	E0520 10:43:09.037619       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p9s2j\": pod kindnet-p9s2j is already assigned to node \"ha-570850-m04\"" pod="kube-system/kindnet-p9s2j"
	I0520 10:43:09.042099       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p9s2j" node="ha-570850-m04"
	E0520 10:43:09.115820       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zm8dl\": pod kube-proxy-zm8dl is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zm8dl" node="ha-570850-m04"
	E0520 10:43:09.115912       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f29aece2-7321-46cf-b65e-845a3194350a(kube-system/kube-proxy-zm8dl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zm8dl"
	E0520 10:43:09.115934       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zm8dl\": pod kube-proxy-zm8dl is already assigned to node \"ha-570850-m04\"" pod="kube-system/kube-proxy-zm8dl"
	I0520 10:43:09.115965       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zm8dl" node="ha-570850-m04"
	E0520 10:43:09.240073       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-88gxd\": pod kindnet-88gxd is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-88gxd" node="ha-570850-m04"
	E0520 10:43:09.240374       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b586d037-5383-4b68-890e-d703bbd704ee(kube-system/kindnet-88gxd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-88gxd"
	E0520 10:43:09.240455       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-88gxd\": pod kindnet-88gxd is already assigned to node \"ha-570850-m04\"" pod="kube-system/kindnet-88gxd"
	I0520 10:43:09.240546       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-88gxd" node="ha-570850-m04"
	E0520 10:43:09.240746       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kt46j\": pod kube-proxy-kt46j is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kt46j" node="ha-570850-m04"
	E0520 10:43:09.243239       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4188c495-b918-43ea-9526-73e8061e8f78(kube-system/kube-proxy-kt46j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kt46j"
	E0520 10:43:09.243341       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kt46j\": pod kube-proxy-kt46j is already assigned to node \"ha-570850-m04\"" pod="kube-system/kube-proxy-kt46j"
	I0520 10:43:09.243593       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kt46j" node="ha-570850-m04"
	
	
	==> kubelet <==
	May 20 10:42:27 ha-570850 kubelet[1371]: E0520 10:42:27.668201    1371 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-570850" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-570850' and this object
	May 20 10:42:27 ha-570850 kubelet[1371]: I0520 10:42:27.696990    1371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r27s\" (UniqueName: \"kubernetes.io/projected/4d01afa5-94fb-411d-9823-db40ab44750a-kube-api-access-5r27s\") pod \"busybox-fc5497c4f-q9r5v\" (UID: \"4d01afa5-94fb-411d-9823-db40ab44750a\") " pod="default/busybox-fc5497c4f-q9r5v"
	May 20 10:42:28 ha-570850 kubelet[1371]: E0520 10:42:28.853244    1371 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 10:42:28 ha-570850 kubelet[1371]: E0520 10:42:28.853556    1371 projected.go:200] Error preparing data for projected volume kube-api-access-5r27s for pod default/busybox-fc5497c4f-q9r5v: failed to sync configmap cache: timed out waiting for the condition
	May 20 10:42:28 ha-570850 kubelet[1371]: E0520 10:42:28.853873    1371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d01afa5-94fb-411d-9823-db40ab44750a-kube-api-access-5r27s podName:4d01afa5-94fb-411d-9823-db40ab44750a nodeName:}" failed. No retries permitted until 2024-05-20 10:42:29.353775692 +0000 UTC m=+216.664995069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5r27s" (UniqueName: "kubernetes.io/projected/4d01afa5-94fb-411d-9823-db40ab44750a-kube-api-access-5r27s") pod "busybox-fc5497c4f-q9r5v" (UID: "4d01afa5-94fb-411d-9823-db40ab44750a") : failed to sync configmap cache: timed out waiting for the condition
	May 20 10:42:52 ha-570850 kubelet[1371]: E0520 10:42:52.853616    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:42:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:42:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:42:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:42:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:43:52 ha-570850 kubelet[1371]: E0520 10:43:52.872500    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:43:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:43:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:43:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:43:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:44:52 ha-570850 kubelet[1371]: E0520 10:44:52.860965    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:44:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:44:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:44:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:44:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:45:52 ha-570850 kubelet[1371]: E0520 10:45:52.854700    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:45:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:45:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:45:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:45:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-570850 -n ha-570850
helpers_test.go:261: (dbg) Run:  kubectl --context ha-570850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.99s)
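Note on the post-mortem logs above: the two recurring error patterns are largely background noise rather than the proximate failure. The kubelet's periodic "Could not set up iptables canary" entries occur because only the IPv6 half of the canary fails here (ip6tables cannot initialize the `nat' table in this guest kernel), and the scheduler's "already assigned to node" bind errors for the kube-proxy/kindnet pods on ha-570850-m04 appear to be a transient race while the new worker joins (the scheduler itself logs "Pod has been assigned to node. Abort adding it back to queue."). Neither obviously accounts for the StopSecondaryNode failure itself. The snippet below is an illustrative diagnostic one could run on the node to confirm the ip6tables situation; it is not part of the test suite, and the `ip6table_nat` module hint is an assumption.

// ip6nat_check.go - illustrative diagnostic, not part of the minikube test suite.
// Runs "ip6tables -t nat -L -n" and reports whether the IPv6 nat table is usable,
// which is what the kubelet's KUBE-KUBELET-CANARY chain creation needs.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		// Matches the kubelet log: "can't initialize ip6tables table `nat'".
		fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
		fmt.Println("hint: the guest kernel may lack the ip6table_nat module (assumption)")
		return
	}
	fmt.Println("ip6tables nat table present; the kubelet canary errors should not occur")
}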

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (3.184684908s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:04.927587   27064 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:04.927700   27064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:04.927706   27064 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:04.927711   27064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:04.927926   27064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:04.928090   27064 out.go:298] Setting JSON to false
	I0520 10:46:04.928116   27064 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:04.928162   27064 notify.go:220] Checking for updates...
	I0520 10:46:04.928686   27064 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:04.928709   27064 status.go:255] checking status of ha-570850 ...
	I0520 10:46:04.929215   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:04.929265   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:04.948242   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I0520 10:46:04.948665   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:04.949304   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:04.949327   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:04.949672   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:04.949889   27064 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:04.951427   27064 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:04.951441   27064 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:04.951743   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:04.951782   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:04.966007   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0520 10:46:04.966375   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:04.966788   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:04.966806   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:04.967151   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:04.967314   27064 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:04.970154   27064 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:04.970593   27064 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:04.970632   27064 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:04.970738   27064 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:04.971048   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:04.971122   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:04.985411   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I0520 10:46:04.985800   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:04.986250   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:04.986268   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:04.986636   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:04.986822   27064 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:04.987037   27064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:04.987056   27064 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:04.989865   27064 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:04.990303   27064 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:04.990327   27064 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:04.990450   27064 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:04.990595   27064 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:04.990763   27064 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:04.990901   27064 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:05.075112   27064 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:05.081500   27064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:05.097168   27064 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:05.097198   27064 api_server.go:166] Checking apiserver status ...
	I0520 10:46:05.097227   27064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:05.113413   27064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:05.123193   27064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:05.123240   27064 ssh_runner.go:195] Run: ls
	I0520 10:46:05.127649   27064 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:05.131622   27064 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:05.131642   27064 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:05.131651   27064 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:05.131666   27064 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:05.131945   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:05.131975   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:05.146369   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0520 10:46:05.146761   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:05.147253   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:05.147277   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:05.147598   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:05.147789   27064 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:05.149307   27064 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:46:05.149324   27064 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:05.149696   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:05.149736   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:05.165037   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0520 10:46:05.165453   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:05.165901   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:05.165923   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:05.166207   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:05.166369   27064 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:46:05.168972   27064 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:05.169396   27064 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:05.169423   27064 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:05.169595   27064 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:05.169944   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:05.169981   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:05.184571   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0520 10:46:05.185001   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:05.185471   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:05.185492   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:05.185757   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:05.185941   27064 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:46:05.186127   27064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:05.186150   27064 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:46:05.188780   27064 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:05.189193   27064 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:05.189218   27064 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:05.189364   27064 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:46:05.189538   27064 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:46:05.189645   27064 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:46:05.189735   27064 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:46:07.725238   27064 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:07.725314   27064 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:46:07.725327   27064 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:07.725334   27064 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:46:07.725351   27064 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:07.725358   27064 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:07.725640   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:07.725687   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:07.740662   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
	I0520 10:46:07.741069   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:07.741477   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:07.741505   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:07.741782   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:07.742028   27064 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:07.743544   27064 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:07.743561   27064 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:07.743871   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:07.743903   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:07.758917   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0520 10:46:07.759259   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:07.759708   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:07.759741   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:07.760037   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:07.760215   27064 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:07.762824   27064 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:07.763187   27064 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:07.763214   27064 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:07.763298   27064 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:07.763585   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:07.763638   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:07.777968   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0520 10:46:07.778359   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:07.778714   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:07.778733   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:07.779004   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:07.779186   27064 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:07.779358   27064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:07.779378   27064 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:07.781880   27064 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:07.782243   27064 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:07.782265   27064 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:07.782380   27064 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:07.782580   27064 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:07.782766   27064 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:07.782913   27064 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:07.868561   27064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:07.884848   27064 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:07.884875   27064 api_server.go:166] Checking apiserver status ...
	I0520 10:46:07.884908   27064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:07.898972   27064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:07.912101   27064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:07.912144   27064 ssh_runner.go:195] Run: ls
	I0520 10:46:07.916606   27064 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:07.922161   27064 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:07.922184   27064 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:07.922194   27064 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:07.922207   27064 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:07.922518   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:07.922552   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:07.937133   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0520 10:46:07.937502   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:07.937979   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:07.938001   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:07.938306   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:07.938501   27064 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:07.939814   27064 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:07.939830   27064 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:07.940082   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:07.940112   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:07.954281   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0520 10:46:07.954684   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:07.955199   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:07.955224   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:07.955613   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:07.955790   27064 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:07.958736   27064 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:07.959206   27064 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:07.959231   27064 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:07.959380   27064 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:07.959785   27064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:07.959823   27064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:07.974480   27064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0520 10:46:07.974932   27064 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:07.975390   27064 main.go:141] libmachine: Using API Version  1
	I0520 10:46:07.975412   27064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:07.975716   27064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:07.975929   27064 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:07.976095   27064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:07.976111   27064 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:07.978511   27064 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:07.978876   27064 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:07.978908   27064 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:07.979264   27064 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:07.979472   27064 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:07.979616   27064 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:07.979738   27064 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:08.056717   27064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:08.071151   27064 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
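In the stderr above, the status check for ha-570850-m02 fails because every SSH dial to 192.168.39.111:22 returns "connect: no route to host" right after `node start`, so the checker reports Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent and the command exits with status 3. The test immediately re-runs `status`; the second attempt (below) hits the same dial failure even after the client's own short retry ("will retry after 321.754464ms"). The sketch below shows the general "wait for the node's SSH port before re-checking" pattern; it is only an illustration under the assumption that polling TCP port 22 is an adequate readiness signal, not minikube's actual implementation.

// wait_ssh.go - minimal sketch: poll a node's SSH port with capped exponential
// backoff before running a follow-up check (e.g. another "minikube status").
// The address and timeouts are assumptions chosen for illustration.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(addr string, perDial, total time.Duration) error {
	deadline := time.Now().Add(total)
	backoff := 500 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, perDial)
		if err == nil {
			conn.Close()
			return nil // port reachable, i.e. "no route to host" has cleared
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s: last error: %w", addr, err)
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	if err := waitForTCP("192.168.39.111:22", 3*time.Second, 90*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH port reachable; safe to re-run the status check")
}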
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (5.189288332s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:09.079207   27163 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:09.079454   27163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:09.079465   27163 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:09.079471   27163 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:09.079647   27163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:09.079826   27163 out.go:298] Setting JSON to false
	I0520 10:46:09.079856   27163 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:09.079968   27163 notify.go:220] Checking for updates...
	I0520 10:46:09.080264   27163 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:09.080281   27163 status.go:255] checking status of ha-570850 ...
	I0520 10:46:09.080628   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:09.080702   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:09.099224   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
	I0520 10:46:09.099718   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:09.100241   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:09.100262   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:09.100663   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:09.100890   27163 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:09.102504   27163 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:09.102518   27163 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:09.102812   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:09.102847   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:09.118476   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0520 10:46:09.118849   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:09.119325   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:09.119346   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:09.119628   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:09.119797   27163 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:09.122401   27163 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:09.122810   27163 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:09.122830   27163 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:09.123011   27163 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:09.123404   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:09.123449   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:09.138483   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I0520 10:46:09.138897   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:09.139352   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:09.139380   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:09.139758   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:09.140010   27163 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:09.140236   27163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:09.140267   27163 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:09.142967   27163 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:09.143395   27163 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:09.143429   27163 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:09.143568   27163 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:09.143758   27163 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:09.143939   27163 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:09.144046   27163 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:09.224621   27163 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:09.230552   27163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:09.244733   27163 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:09.244771   27163 api_server.go:166] Checking apiserver status ...
	I0520 10:46:09.244809   27163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:09.259390   27163 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:09.269816   27163 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:09.269862   27163 ssh_runner.go:195] Run: ls
	I0520 10:46:09.275087   27163 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:09.280987   27163 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:09.281009   27163 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:09.281021   27163 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:09.281042   27163 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:09.281322   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:09.281354   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:09.296331   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0520 10:46:09.296747   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:09.297228   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:09.297249   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:09.297511   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:09.297671   27163 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:09.299227   27163 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:46:09.299245   27163 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:09.299518   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:09.299559   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:09.314542   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0520 10:46:09.314992   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:09.315430   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:09.315449   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:09.315745   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:09.315897   27163 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:46:09.318664   27163 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:09.319075   27163 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:09.319097   27163 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:09.319191   27163 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:09.319519   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:09.319552   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:09.334198   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0520 10:46:09.334605   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:09.335073   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:09.335092   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:09.335416   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:09.335603   27163 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:46:09.335777   27163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:09.335799   27163 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:46:09.338407   27163 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:09.338819   27163 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:09.338844   27163 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:09.338964   27163 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:46:09.339119   27163 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:46:09.339269   27163 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:46:09.339413   27163 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:46:10.797159   27163 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:10.797205   27163 retry.go:31] will retry after 321.754464ms: dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:13.869279   27163 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:13.869369   27163 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:46:13.869383   27163 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:13.869391   27163 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:46:13.869434   27163 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:13.869445   27163 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:13.869873   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:13.869934   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:13.884655   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0520 10:46:13.885078   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:13.885542   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:13.885565   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:13.885844   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:13.886033   27163 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:13.887505   27163 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:13.887522   27163 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:13.887814   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:13.887848   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:13.902869   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0520 10:46:13.903290   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:13.903729   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:13.903748   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:13.904107   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:13.904338   27163 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:13.907203   27163 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:13.907620   27163 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:13.907648   27163 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:13.907781   27163 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:13.908167   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:13.908221   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:13.922875   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I0520 10:46:13.923309   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:13.923754   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:13.923775   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:13.924075   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:13.924260   27163 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:13.924423   27163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:13.924439   27163 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:13.927191   27163 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:13.927659   27163 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:13.927680   27163 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:13.927890   27163 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:13.928057   27163 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:13.928180   27163 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:13.928335   27163 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:14.014105   27163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:14.035047   27163 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:14.035074   27163 api_server.go:166] Checking apiserver status ...
	I0520 10:46:14.035110   27163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:14.052451   27163 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:14.064035   27163 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:14.064102   27163 ssh_runner.go:195] Run: ls
	I0520 10:46:14.068733   27163 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:14.075027   27163 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:14.075049   27163 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:14.075059   27163 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:14.075077   27163 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:14.075411   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:14.075452   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:14.091015   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0520 10:46:14.091448   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:14.091873   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:14.091891   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:14.092205   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:14.092411   27163 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:14.093823   27163 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:14.093835   27163 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:14.094132   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:14.094169   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:14.109298   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I0520 10:46:14.109719   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:14.110156   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:14.110177   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:14.110464   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:14.110678   27163 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:14.113252   27163 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:14.113692   27163 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:14.113712   27163 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:14.113852   27163 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:14.114120   27163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:14.114150   27163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:14.129457   27163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0520 10:46:14.129847   27163 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:14.130283   27163 main.go:141] libmachine: Using API Version  1
	I0520 10:46:14.130303   27163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:14.130618   27163 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:14.130803   27163 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:14.130985   27163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:14.131007   27163 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:14.133647   27163 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:14.134058   27163 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:14.134087   27163 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:14.134225   27163 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:14.134378   27163 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:14.134557   27163 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:14.134715   27163 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:14.213408   27163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:14.229079   27163 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
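The status run above fails on ha-570850-m02 because the SSH dial to 192.168.39.111:22 returns "no route to host", which the status command maps to Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. Below is a minimal, illustrative Go sketch (not minikube's actual sshutil code) of that reachability check; only the address comes from the log, and the retry count and delays are assumptions.

	// Sketch only: probe a node's SSH port and treat a dial failure such as
	// "no route to host" as a host error, as the status output above does.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.111:22" // ha-570850-m02, per the DHCP lease in the log
		var lastErr error
		for attempt := 0; attempt < 3; attempt++ { // retry policy is illustrative
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("host reachable")
				return
			}
			lastErr = err
			time.Sleep(300 * time.Millisecond) // the log retries after ~270ms
		}
		// With no route to host, the reported status falls back to
		// Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent.
		fmt.Println("host unreachable:", lastErr)
	}
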
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (4.599835014s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:16.054227   27263 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:16.054462   27263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:16.054471   27263 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:16.054476   27263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:16.054677   27263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:16.054833   27263 out.go:298] Setting JSON to false
	I0520 10:46:16.054856   27263 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:16.054894   27263 notify.go:220] Checking for updates...
	I0520 10:46:16.055247   27263 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:16.055266   27263 status.go:255] checking status of ha-570850 ...
	I0520 10:46:16.055694   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:16.055748   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:16.073997   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0520 10:46:16.074375   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:16.074922   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:16.074947   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:16.075285   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:16.075498   27263 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:16.077696   27263 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:16.077719   27263 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:16.078128   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:16.078172   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:16.093858   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0520 10:46:16.094276   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:16.094735   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:16.094766   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:16.095047   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:16.095241   27263 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:16.097935   27263 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:16.098353   27263 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:16.098379   27263 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:16.098505   27263 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:16.098845   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:16.098886   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:16.113762   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0520 10:46:16.114140   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:16.114567   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:16.114590   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:16.114922   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:16.115126   27263 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:16.115328   27263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:16.115359   27263 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:16.118154   27263 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:16.118566   27263 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:16.118634   27263 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:16.118721   27263 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:16.118880   27263 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:16.119021   27263 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:16.119196   27263 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:16.201876   27263 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:16.208829   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:16.224946   27263 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:16.224990   27263 api_server.go:166] Checking apiserver status ...
	I0520 10:46:16.225022   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:16.239704   27263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:16.251135   27263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:16.251199   27263 ssh_runner.go:195] Run: ls
	I0520 10:46:16.256082   27263 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:16.260671   27263 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:16.260689   27263 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:16.260702   27263 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:16.260723   27263 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:16.261134   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:16.261177   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:16.275834   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0520 10:46:16.276264   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:16.276780   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:16.276811   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:16.277162   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:16.277330   27263 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:16.278959   27263 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:46:16.278973   27263 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:16.279320   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:16.279373   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:16.293791   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0520 10:46:16.294190   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:16.294634   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:16.294655   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:16.294951   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:16.295164   27263 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:46:16.297743   27263 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:16.298149   27263 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:16.298181   27263 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:16.298298   27263 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:16.298584   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:16.298616   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:16.314367   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0520 10:46:16.314755   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:16.315199   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:16.315219   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:16.315526   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:16.315745   27263 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:46:16.315929   27263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:16.315947   27263 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:46:16.318640   27263 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:16.319062   27263 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:16.319088   27263 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:16.319218   27263 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:46:16.319420   27263 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:46:16.319538   27263 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:46:16.319649   27263 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:46:16.945229   27263 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:16.945289   27263 retry.go:31] will retry after 270.716227ms: dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:20.269163   27263 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:20.269248   27263 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:46:20.269295   27263 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:20.269307   27263 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:46:20.269336   27263 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:20.269348   27263 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:20.269646   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:20.269702   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:20.284567   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40247
	I0520 10:46:20.284975   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:20.285500   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:20.285527   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:20.285850   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:20.286042   27263 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:20.287568   27263 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:20.287588   27263 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:20.287887   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:20.287920   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:20.302151   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0520 10:46:20.302539   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:20.302946   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:20.302964   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:20.303242   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:20.303417   27263 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:20.305938   27263 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:20.306329   27263 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:20.306358   27263 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:20.306494   27263 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:20.306793   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:20.306846   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:20.321012   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0520 10:46:20.321415   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:20.321924   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:20.321946   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:20.322233   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:20.322426   27263 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:20.322597   27263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:20.322616   27263 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:20.325393   27263 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:20.325792   27263 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:20.325826   27263 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:20.325941   27263 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:20.326137   27263 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:20.326272   27263 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:20.326404   27263 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:20.408496   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:20.425775   27263 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:20.425806   27263 api_server.go:166] Checking apiserver status ...
	I0520 10:46:20.425855   27263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:20.440533   27263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:20.450738   27263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:20.450841   27263 ssh_runner.go:195] Run: ls
	I0520 10:46:20.455523   27263 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:20.459814   27263 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:20.459844   27263 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:20.459856   27263 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:20.459876   27263 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:20.460251   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:20.460290   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:20.476355   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I0520 10:46:20.476761   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:20.477239   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:20.477263   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:20.477524   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:20.477692   27263 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:20.479186   27263 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:20.479203   27263 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:20.479510   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:20.479549   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:20.494583   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0520 10:46:20.495045   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:20.495486   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:20.495505   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:20.495849   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:20.496052   27263 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:20.498634   27263 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:20.499038   27263 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:20.499066   27263 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:20.499181   27263 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:20.499458   27263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:20.499491   27263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:20.514018   27263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0520 10:46:20.514429   27263 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:20.514922   27263 main.go:141] libmachine: Using API Version  1
	I0520 10:46:20.514944   27263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:20.515270   27263 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:20.515440   27263 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:20.515629   27263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:20.515654   27263 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:20.518327   27263 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:20.518753   27263 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:20.518779   27263 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:20.518920   27263 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:20.519075   27263 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:20.519241   27263 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:20.519378   27263 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:20.597616   27263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:20.611485   27263 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (3.750838013s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:23.322028   27379 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:23.322126   27379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:23.322135   27379 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:23.322139   27379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:23.322323   27379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:23.322464   27379 out.go:298] Setting JSON to false
	I0520 10:46:23.322486   27379 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:23.322522   27379 notify.go:220] Checking for updates...
	I0520 10:46:23.322813   27379 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:23.322828   27379 status.go:255] checking status of ha-570850 ...
	I0520 10:46:23.323146   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:23.323206   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:23.341994   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0520 10:46:23.342516   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:23.343122   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:23.343142   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:23.343494   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:23.343653   27379 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:23.345280   27379 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:23.345305   27379 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:23.345707   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:23.345755   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:23.360792   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0520 10:46:23.361219   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:23.361721   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:23.361744   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:23.362028   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:23.362236   27379 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:23.364678   27379 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:23.365155   27379 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:23.365173   27379 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:23.365337   27379 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:23.365668   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:23.365710   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:23.381277   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I0520 10:46:23.381706   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:23.382223   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:23.382251   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:23.382616   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:23.382870   27379 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:23.383073   27379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:23.383107   27379 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:23.385938   27379 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:23.386299   27379 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:23.386320   27379 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:23.386449   27379 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:23.386637   27379 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:23.386799   27379 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:23.386972   27379 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:23.472749   27379 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:23.479043   27379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:23.494556   27379 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:23.494587   27379 api_server.go:166] Checking apiserver status ...
	I0520 10:46:23.494618   27379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:23.510097   27379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:23.519583   27379 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:23.519633   27379 ssh_runner.go:195] Run: ls
	I0520 10:46:23.524184   27379 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:23.530439   27379 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:23.530465   27379 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:23.530479   27379 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:23.530494   27379 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:23.530871   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:23.530925   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:23.545894   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0520 10:46:23.546288   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:23.546736   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:23.546756   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:23.547068   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:23.547290   27379 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:23.548847   27379 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:46:23.548868   27379 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:23.549286   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:23.549350   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:23.563705   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0520 10:46:23.564106   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:23.564541   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:23.564560   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:23.564848   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:23.565069   27379 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:46:23.567549   27379 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:23.567933   27379 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:23.567957   27379 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:23.568095   27379 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:23.568369   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:23.568398   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:23.583941   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 10:46:23.584397   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:23.584838   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:23.584863   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:23.585197   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:23.585370   27379 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:46:23.585571   27379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:23.585594   27379 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:46:23.588433   27379 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:23.589019   27379 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:23.589055   27379 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:23.589314   27379 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:46:23.589500   27379 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:46:23.589682   27379 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:46:23.589849   27379 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:46:26.669175   27379 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:26.669289   27379 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:46:26.669317   27379 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:26.669329   27379 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:46:26.669374   27379 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:26.669388   27379 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:26.669711   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:26.669759   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:26.684176   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I0520 10:46:26.684620   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:26.685121   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:26.685141   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:26.685459   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:26.685648   27379 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:26.687174   27379 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:26.687193   27379 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:26.687591   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:26.687633   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:26.701864   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0520 10:46:26.702285   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:26.702730   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:26.702755   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:26.703080   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:26.703262   27379 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:26.705922   27379 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:26.706327   27379 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:26.706355   27379 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:26.706474   27379 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:26.706760   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:26.706798   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:26.721620   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0520 10:46:26.722135   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:26.722636   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:26.722672   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:26.722975   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:26.723173   27379 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:26.723306   27379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:26.723322   27379 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:26.725997   27379 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:26.726390   27379 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:26.726421   27379 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:26.726533   27379 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:26.726696   27379 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:26.726800   27379 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:26.726939   27379 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:26.811126   27379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:26.830157   27379 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:26.830189   27379 api_server.go:166] Checking apiserver status ...
	I0520 10:46:26.830229   27379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:26.852064   27379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:26.866521   27379 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:26.866585   27379 ssh_runner.go:195] Run: ls
	I0520 10:46:26.873597   27379 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:26.878891   27379 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:26.878924   27379 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:26.878931   27379 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:26.878948   27379 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:26.879285   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:26.879328   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:26.893792   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0520 10:46:26.894154   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:26.894727   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:26.894756   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:26.895083   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:26.895321   27379 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:26.897003   27379 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:26.897020   27379 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:26.897413   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:26.897481   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:26.913063   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
	I0520 10:46:26.913431   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:26.913835   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:26.913859   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:26.914138   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:26.914328   27379 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:26.916989   27379 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:26.917376   27379 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:26.917402   27379 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:26.917509   27379 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:26.917877   27379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:26.917914   27379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:26.933258   27379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41045
	I0520 10:46:26.933724   27379 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:26.934197   27379 main.go:141] libmachine: Using API Version  1
	I0520 10:46:26.934223   27379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:26.934554   27379 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:26.934741   27379 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:26.935072   27379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:26.935098   27379 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:26.937740   27379 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:26.938175   27379 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:26.938205   27379 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:26.938404   27379 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:26.938571   27379 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:26.938714   27379 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:26.938849   27379 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:27.017498   27379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:27.032849   27379 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
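For the reachable nodes, the captures above show the probe sequence behind each healthy status line: df -h /var over SSH, systemctl is-active kubelet, pgrep for kube-apiserver, and a GET on https://192.168.39.254:8443/healthz that must return 200 ok. The following is a minimal, self-contained Go sketch of that sequence, under the assumption that it runs directly on a control-plane node rather than over SSH; only the commands and the healthz URL are taken from the log.

	// Sketch only: the kubelet/apiserver/healthz checks recorded in the status logs.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"time"
	)

	func main() {
		// kubelet active? (the log runs this on each node)
		kubelet := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil

		// kube-apiserver process present?
		apiserver := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil

		// healthz on the HA virtual IP seen in the log; "returned 200: ok" means healthy.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		healthy := false
		if resp, err := client.Get("https://192.168.39.254:8443/healthz"); err == nil {
			healthy = resp.StatusCode == http.StatusOK
			resp.Body.Close()
		}

		fmt.Printf("kubelet=%v apiserver=%v healthz=%v\n", kubelet, apiserver, healthy)
	}
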
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (3.712681644s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:29.719387   27480 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:29.719621   27480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:29.719630   27480 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:29.719634   27480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:29.719785   27480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:29.719964   27480 out.go:298] Setting JSON to false
	I0520 10:46:29.719988   27480 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:29.720094   27480 notify.go:220] Checking for updates...
	I0520 10:46:29.720337   27480 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:29.720350   27480 status.go:255] checking status of ha-570850 ...
	I0520 10:46:29.720695   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:29.720739   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:29.738989   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0520 10:46:29.739533   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:29.740176   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:29.740201   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:29.740532   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:29.740685   27480 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:29.742326   27480 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:29.742344   27480 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:29.742624   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:29.742661   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:29.757015   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0520 10:46:29.757364   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:29.757753   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:29.757771   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:29.758028   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:29.758189   27480 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:29.760812   27480 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:29.761210   27480 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:29.761246   27480 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:29.761341   27480 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:29.761654   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:29.761690   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:29.775951   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37919
	I0520 10:46:29.776398   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:29.776850   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:29.776871   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:29.777184   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:29.777387   27480 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:29.777539   27480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:29.777564   27480 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:29.780164   27480 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:29.780531   27480 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:29.780559   27480 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:29.780712   27480 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:29.780891   27480 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:29.781085   27480 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:29.781250   27480 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:29.864756   27480 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:29.870917   27480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:29.885975   27480 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:29.886005   27480 api_server.go:166] Checking apiserver status ...
	I0520 10:46:29.886035   27480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:29.900219   27480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:29.911950   27480 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:29.912025   27480 ssh_runner.go:195] Run: ls
	I0520 10:46:29.917245   27480 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:29.923066   27480 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:29.923087   27480 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:29.923097   27480 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:29.923111   27480 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:29.923395   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:29.923428   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:29.937971   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45023
	I0520 10:46:29.938418   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:29.938882   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:29.938907   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:29.939218   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:29.939414   27480 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:29.940895   27480 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:46:29.940916   27480 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:29.941313   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:29.941349   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:29.956177   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I0520 10:46:29.956542   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:29.957008   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:29.957026   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:29.957329   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:29.957508   27480 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:46:29.960188   27480 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:29.960594   27480 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:29.960622   27480 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:29.960755   27480 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:29.961114   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:29.961152   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:29.975837   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0520 10:46:29.976261   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:29.976683   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:29.976697   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:29.977003   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:29.977183   27480 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:46:29.977363   27480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:29.977381   27480 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:46:29.980086   27480 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:29.980502   27480 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:29.980532   27480 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:29.980638   27480 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:46:29.980806   27480 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:46:29.980979   27480 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:46:29.981134   27480 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:46:33.041156   27480 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:33.041251   27480 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:46:33.041264   27480 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:33.041271   27480 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:46:33.041286   27480 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:33.041303   27480 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:33.041586   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:33.041624   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:33.056421   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0520 10:46:33.056922   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:33.057466   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:33.057505   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:33.057835   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:33.058033   27480 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:33.059513   27480 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:33.059527   27480 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:33.059783   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:33.059823   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:33.076486   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I0520 10:46:33.076911   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:33.077404   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:33.077431   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:33.077737   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:33.077899   27480 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:33.080345   27480 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:33.080773   27480 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:33.080797   27480 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:33.080955   27480 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:33.081371   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:33.081414   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:33.096058   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I0520 10:46:33.096446   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:33.096877   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:33.096900   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:33.097206   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:33.097423   27480 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:33.097610   27480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:33.097628   27480 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:33.100209   27480 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:33.100641   27480 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:33.100659   27480 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:33.100781   27480 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:33.100960   27480 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:33.101117   27480 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:33.101251   27480 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:33.184690   27480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:33.203332   27480 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:33.203357   27480 api_server.go:166] Checking apiserver status ...
	I0520 10:46:33.203386   27480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:33.218433   27480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:33.229351   27480 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:33.229427   27480 ssh_runner.go:195] Run: ls
	I0520 10:46:33.234365   27480 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:33.240224   27480 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:33.240244   27480 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:33.240251   27480 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:33.240265   27480 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:33.240537   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:33.240568   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:33.255946   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
	I0520 10:46:33.256331   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:33.256792   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:33.256819   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:33.257222   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:33.257388   27480 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:33.258972   27480 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:33.258990   27480 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:33.259284   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:33.259328   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:33.274002   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43271
	I0520 10:46:33.274429   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:33.275043   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:33.275075   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:33.275419   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:33.275607   27480 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:33.278275   27480 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:33.278664   27480 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:33.278688   27480 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:33.278854   27480 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:33.279144   27480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:33.279181   27480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:33.293362   27480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0520 10:46:33.293725   27480 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:33.294231   27480 main.go:141] libmachine: Using API Version  1
	I0520 10:46:33.294257   27480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:33.294559   27480 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:33.294775   27480 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:33.294965   27480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:33.294998   27480 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:33.297396   27480 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:33.297784   27480 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:33.297809   27480 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:33.297914   27480 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:33.298078   27480 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:33.298223   27480 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:33.298360   27480 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:33.376427   27480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:33.391945   27480 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
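The exit status 3 in the run above traces back to the SSH dial in the ha-570850-m02 check: "dial tcp 192.168.39.111:22: connect: no route to host". A minimal standalone Go sketch of that reachability probe, not part of the test suite (the address and port are taken from the log above; the helper name is made up for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH mirrors the dial the status check performs before running any
// command on a node: open a TCP connection to the node's SSH port and
// report any error.
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return fmt.Errorf("ssh port unreachable: %w", err)
	}
	return conn.Close()
}

func main() {
	// 192.168.39.111 is ha-570850-m02's address from the DHCP lease above.
	if err := probeSSH("192.168.39.111:22"); err != nil {
		// While the node is unreachable this prints the same
		// "no route to host" error that status reports for the node.
		fmt.Println(err)
	}
}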
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 3 (3.718726671s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:37.789249   27596 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:37.789375   27596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:37.789384   27596 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:37.789391   27596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:37.789611   27596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:37.789773   27596 out.go:298] Setting JSON to false
	I0520 10:46:37.789804   27596 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:37.789905   27596 notify.go:220] Checking for updates...
	I0520 10:46:37.790228   27596 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:37.790246   27596 status.go:255] checking status of ha-570850 ...
	I0520 10:46:37.790663   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:37.790714   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:37.806069   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
	I0520 10:46:37.806398   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:37.806892   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:37.806913   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:37.807223   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:37.807404   27596 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:37.809100   27596 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:37.809124   27596 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:37.809451   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:37.809496   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:37.825284   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0520 10:46:37.825803   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:37.826414   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:37.826444   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:37.826844   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:37.827193   27596 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:37.830710   27596 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:37.831237   27596 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:37.831262   27596 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:37.831397   27596 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:37.831688   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:37.831740   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:37.847156   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0520 10:46:37.847542   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:37.847947   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:37.847966   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:37.848297   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:37.848489   27596 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:37.848665   27596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:37.848695   27596 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:37.851184   27596 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:37.851600   27596 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:37.851630   27596 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:37.851756   27596 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:37.851922   27596 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:37.852064   27596 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:37.852201   27596 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:37.932765   27596 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:37.939033   27596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:37.954353   27596 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:37.954388   27596 api_server.go:166] Checking apiserver status ...
	I0520 10:46:37.954430   27596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:37.969172   27596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:37.979382   27596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:37.979451   27596 ssh_runner.go:195] Run: ls
	I0520 10:46:37.984579   27596 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:37.990654   27596 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:37.990680   27596 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:37.990694   27596 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:37.990746   27596 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:37.991039   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:37.991082   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:38.007198   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0520 10:46:38.007687   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:38.008253   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:38.008274   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:38.008670   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:38.008876   27596 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:38.010606   27596 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:46:38.010624   27596 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:38.010889   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:38.010919   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:38.026734   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42541
	I0520 10:46:38.027171   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:38.027644   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:38.027664   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:38.027984   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:38.028220   27596 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:46:38.030992   27596 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:38.031399   27596 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:38.031423   27596 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:38.031560   27596 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:46:38.031938   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:38.031981   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:38.046402   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I0520 10:46:38.046750   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:38.047180   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:38.047200   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:38.047489   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:38.047686   27596 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:46:38.047873   27596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:38.047892   27596 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:46:38.050490   27596 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:38.050887   27596 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:46:38.050913   27596 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:46:38.051038   27596 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:46:38.051209   27596 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:46:38.051381   27596 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:46:38.051526   27596 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:46:41.101191   27596 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:46:41.101281   27596 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:46:41.101310   27596 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:41.101319   27596 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:46:41.101336   27596 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:46:41.101349   27596 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:41.101654   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:41.101704   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:41.120520   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0520 10:46:41.121016   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:41.121523   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:41.121547   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:41.121896   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:41.122072   27596 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:41.123640   27596 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:41.123654   27596 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:41.123993   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:41.124036   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:41.139605   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0520 10:46:41.140088   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:41.140561   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:41.140592   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:41.140871   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:41.141096   27596 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:41.144328   27596 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:41.145090   27596 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:41.145124   27596 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:41.145383   27596 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:41.145779   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:41.145832   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:41.160235   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
	I0520 10:46:41.160660   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:41.161259   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:41.161283   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:41.161579   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:41.161763   27596 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:41.161950   27596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:41.161970   27596 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:41.164600   27596 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:41.165123   27596 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:41.165149   27596 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:41.165285   27596 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:41.165459   27596 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:41.165627   27596 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:41.165752   27596 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:41.248384   27596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:41.263829   27596 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:41.263858   27596 api_server.go:166] Checking apiserver status ...
	I0520 10:46:41.263906   27596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:41.284220   27596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:41.294132   27596 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:41.294191   27596 ssh_runner.go:195] Run: ls
	I0520 10:46:41.298789   27596 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:41.305200   27596 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:41.305222   27596 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:41.305232   27596 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:41.305261   27596 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:41.305588   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:41.305635   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:41.320831   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33909
	I0520 10:46:41.321213   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:41.321675   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:41.321694   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:41.322003   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:41.322233   27596 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:41.323866   27596 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:41.323882   27596 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:41.324209   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:41.324252   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:41.339169   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I0520 10:46:41.339565   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:41.339993   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:41.340017   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:41.340380   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:41.340587   27596 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:41.343378   27596 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:41.343782   27596 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:41.343824   27596 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:41.343987   27596 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:41.344392   27596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:41.344434   27596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:41.359420   27596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0520 10:46:41.359789   27596 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:41.360320   27596 main.go:141] libmachine: Using API Version  1
	I0520 10:46:41.360339   27596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:41.360695   27596 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:41.360897   27596 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:41.361098   27596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:41.361121   27596 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:41.364043   27596 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:41.364424   27596 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:41.364445   27596 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:41.364607   27596 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:41.364780   27596 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:41.364922   27596 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:41.365090   27596 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:41.445988   27596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:41.464913   27596 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
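The runs above exit with status 3 while the m02 host still reports Running but its SSH port is unreachable; the run below exits with status 7 once libmachine reports the host as Stopped. A small Go sketch of polling for that transition, an illustration only (the binary path and profile name are the ones used in this report; the helper and the 5-second poll interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForExitCode re-runs `minikube status` for a profile until the command
// returns the wanted exit code or the deadline passes.
func waitForExitCode(binary, profile string, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(binary, "-p", profile, "status")
		err := cmd.Run()
		if err != nil && cmd.ProcessState == nil {
			// The command never started (e.g. wrong binary path).
			return fmt.Errorf("could not run %s: %w", binary, err)
		}
		if cmd.ProcessState.ExitCode() == want {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for exit code %d", want)
}

func main() {
	// In the run below, status exits 7 once ha-570850-m02 is reported Stopped.
	if err := waitForExitCode("out/minikube-linux-amd64", "ha-570850", 7, 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}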
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 7 (597.745149ms)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:47.128395   27732 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:47.128656   27732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:47.128668   27732 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:47.128672   27732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:47.128879   27732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:47.129082   27732 out.go:298] Setting JSON to false
	I0520 10:46:47.129107   27732 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:47.129149   27732 notify.go:220] Checking for updates...
	I0520 10:46:47.129589   27732 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:47.129609   27732 status.go:255] checking status of ha-570850 ...
	I0520 10:46:47.130119   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.130166   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.148368   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0520 10:46:47.148820   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.149480   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.149507   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.149906   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.150161   27732 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:47.152239   27732 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:47.152256   27732 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:47.152560   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.152609   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.167212   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0520 10:46:47.167629   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.168160   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.168192   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.168531   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.168707   27732 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:47.171757   27732 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:47.172185   27732 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:47.172210   27732 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:47.172351   27732 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:47.172635   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.172678   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.186963   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0520 10:46:47.187359   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.187791   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.187813   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.188133   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.188357   27732 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:47.188522   27732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:47.188553   27732 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:47.191260   27732 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:47.191724   27732 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:47.191754   27732 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:47.191875   27732 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:47.192050   27732 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:47.192233   27732 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:47.192361   27732 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:47.272911   27732 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:47.279771   27732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:47.295043   27732 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:47.295090   27732 api_server.go:166] Checking apiserver status ...
	I0520 10:46:47.295131   27732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:47.308868   27732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:47.320858   27732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:47.320915   27732 ssh_runner.go:195] Run: ls
	I0520 10:46:47.325799   27732 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:47.330063   27732 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:47.330085   27732 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:47.330094   27732 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:47.330108   27732 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:47.330380   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.330433   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.346063   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I0520 10:46:47.346465   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.346958   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.346983   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.347286   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.347471   27732 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:47.349091   27732 status.go:330] ha-570850-m02 host status = "Stopped" (err=<nil>)
	I0520 10:46:47.349107   27732 status.go:343] host is not running, skipping remaining checks
	I0520 10:46:47.349128   27732 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:47.349148   27732 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:47.349508   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.349550   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.364077   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0520 10:46:47.364488   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.364923   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.364971   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.365252   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.365423   27732 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:47.366864   27732 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:47.366883   27732 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:47.367189   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.367223   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.381669   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I0520 10:46:47.382054   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.382539   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.382560   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.382858   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.383063   27732 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:47.385634   27732 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:47.386007   27732 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:47.386027   27732 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:47.386141   27732 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:47.386420   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.386456   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.401831   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43609
	I0520 10:46:47.402166   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.402602   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.402622   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.402915   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.403107   27732 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:47.403286   27732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:47.403310   27732 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:47.405788   27732 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:47.406236   27732 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:47.406273   27732 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:47.406514   27732 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:47.406641   27732 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:47.406786   27732 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:47.406895   27732 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:47.488697   27732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:47.503640   27732 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:47.503673   27732 api_server.go:166] Checking apiserver status ...
	I0520 10:46:47.503713   27732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:47.518813   27732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:47.528698   27732 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:47.528753   27732 ssh_runner.go:195] Run: ls
	I0520 10:46:47.533317   27732 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:47.537415   27732 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:47.537433   27732 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:47.537448   27732 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:47.537462   27732 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:47.537726   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.537779   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.552857   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I0520 10:46:47.553288   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.553736   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.553752   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.554128   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.554301   27732 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:47.555656   27732 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:47.555670   27732 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:47.555941   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.555971   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.570576   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0520 10:46:47.570931   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.571326   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.571343   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.571623   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.571786   27732 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:47.574509   27732 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:47.574887   27732 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:47.574913   27732 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:47.575037   27732 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:47.575333   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:47.575365   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:47.589715   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0520 10:46:47.590084   27732 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:47.590572   27732 main.go:141] libmachine: Using API Version  1
	I0520 10:46:47.590598   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:47.590923   27732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:47.591112   27732 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:47.591308   27732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:47.591330   27732 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:47.593877   27732 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:47.594283   27732 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:47.594309   27732 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:47.594417   27732 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:47.594576   27732 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:47.594726   27732 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:47.594863   27732 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:47.672573   27732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:47.686798   27732 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
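The stderr capture above shows how the status command walks every node of the ha-570850 cluster: it launches the kvm2 driver plugin to read each VM's state, and for every node whose host is running it opens an SSH session, checks disk usage on /var, asks systemd whether the kubelet is active, looks for the kube-apiserver process, and finally queries the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. ha-570850-m02 is still reported as Stopped, so the remaining checks are skipped for it. A rough manual equivalent of those probes (an illustrative sketch only; the IP, SSH key path and VIP are taken from the log lines above) looks like this:

	# Re-run the same kubelet probe status.go issues over SSH against ha-570850-m03,
	# then hit the shared apiserver VIP the way the healthz check does.
	ssh -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa \
	    docker@192.168.39.240 'sudo systemctl is-active --quiet service kubelet' && echo 'kubelet: Running'
	curl -sk https://192.168.39.254:8443/healthz    # -k skips CA verification here; a healthy apiserver answers "ok"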
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 7 (611.625738ms)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-570850-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:55.554973   27821 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:55.555226   27821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:55.555236   27821 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:55.555240   27821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:55.555393   27821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:55.555541   27821 out.go:298] Setting JSON to false
	I0520 10:46:55.555565   27821 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:55.555607   27821 notify.go:220] Checking for updates...
	I0520 10:46:55.556080   27821 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:55.556101   27821 status.go:255] checking status of ha-570850 ...
	I0520 10:46:55.556486   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.556541   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.571120   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0520 10:46:55.571534   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.572129   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.572149   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.572533   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.572727   27821 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:46:55.574368   27821 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:46:55.574391   27821 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:55.574679   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.574729   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.589966   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0520 10:46:55.590380   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.590916   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.590940   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.591337   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.591566   27821 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:46:55.594415   27821 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:55.594829   27821 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:55.594864   27821 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:55.594984   27821 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:46:55.595268   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.595308   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.611064   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0520 10:46:55.611436   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.611829   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.611859   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.612168   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.612365   27821 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:46:55.612523   27821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:55.612555   27821 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:46:55.615037   27821 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:55.615449   27821 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:46:55.615473   27821 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:46:55.615614   27821 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:46:55.615753   27821 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:46:55.615909   27821 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:46:55.616021   27821 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:46:55.700744   27821 ssh_runner.go:195] Run: systemctl --version
	I0520 10:46:55.708134   27821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:55.729192   27821 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:55.729222   27821 api_server.go:166] Checking apiserver status ...
	I0520 10:46:55.729254   27821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:55.744425   27821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0520 10:46:55.754246   27821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:55.754305   27821 ssh_runner.go:195] Run: ls
	I0520 10:46:55.759418   27821 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:55.763566   27821 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:55.763585   27821 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:46:55.763594   27821 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:55.763614   27821 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:46:55.763937   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.763970   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.780229   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33319
	I0520 10:46:55.780615   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.781081   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.781104   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.781409   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.781622   27821 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:46:55.783049   27821 status.go:330] ha-570850-m02 host status = "Stopped" (err=<nil>)
	I0520 10:46:55.783062   27821 status.go:343] host is not running, skipping remaining checks
	I0520 10:46:55.783068   27821 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:55.783081   27821 status.go:255] checking status of ha-570850-m03 ...
	I0520 10:46:55.783330   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.783367   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.797697   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0520 10:46:55.798075   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.798503   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.798526   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.798794   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.798962   27821 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:46:55.800378   27821 status.go:330] ha-570850-m03 host status = "Running" (err=<nil>)
	I0520 10:46:55.800395   27821 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:55.800662   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.800698   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.816612   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39889
	I0520 10:46:55.816994   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.817415   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.817437   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.817723   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.817930   27821 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:46:55.820555   27821 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:55.820999   27821 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:55.821018   27821 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:55.821190   27821 host.go:66] Checking if "ha-570850-m03" exists ...
	I0520 10:46:55.821513   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.821552   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.836140   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0520 10:46:55.836603   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.837188   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.837211   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.837565   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.837775   27821 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:46:55.837936   27821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:55.837969   27821 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:46:55.840750   27821 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:55.841174   27821 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:46:55.841206   27821 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:46:55.841315   27821 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:46:55.841465   27821 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:46:55.841609   27821 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:46:55.841751   27821 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:46:55.925530   27821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:55.940660   27821 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:46:55.940685   27821 api_server.go:166] Checking apiserver status ...
	I0520 10:46:55.940713   27821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:46:55.956529   27821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup
	W0520 10:46:55.966636   27821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1549/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:46:55.966694   27821 ssh_runner.go:195] Run: ls
	I0520 10:46:55.971300   27821 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:46:55.975944   27821 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:46:55.975969   27821 status.go:422] ha-570850-m03 apiserver status = Running (err=<nil>)
	I0520 10:46:55.975980   27821 status.go:257] ha-570850-m03 status: &{Name:ha-570850-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:46:55.976016   27821 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:46:55.976404   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.976447   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:55.992315   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
	I0520 10:46:55.992718   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:55.993316   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:55.993346   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:55.993639   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:55.993855   27821 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:55.995248   27821 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:46:55.995261   27821 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:55.995515   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:55.995545   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:56.009931   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39393
	I0520 10:46:56.010362   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:56.010808   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:56.010833   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:56.011158   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:56.011307   27821 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:46:56.014038   27821 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:56.014445   27821 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:56.014472   27821 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:56.014628   27821 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:46:56.014919   27821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:56.014956   27821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:56.029888   27821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43421
	I0520 10:46:56.030279   27821 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:56.030724   27821 main.go:141] libmachine: Using API Version  1
	I0520 10:46:56.030743   27821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:56.031050   27821 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:56.031223   27821 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:56.031392   27821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:46:56.031416   27821 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:56.034200   27821 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:56.034624   27821 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:56.034660   27821 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:56.034770   27821 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:56.034946   27821 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:56.035081   27821 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:56.035232   27821 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:56.112473   27821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:46:56.126873   27821 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr" : exit status 7
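Exit status 7 is the expected signal for a partially stopped cluster rather than a crash of the status command itself: minikube's status help text documents the exit code as bit flags for VM, cluster and Kubernetes health (7 = 1 + 2 + 4), which matches ha-570850-m02 reporting Host, Kubelet and APIServer all Stopped. Because the test requires a clean exit from this status check after restarting the secondary node, the degraded status is recorded as a failure. The same code can be observed outside the test harness (an illustrative sketch; the profile name is taken from this run):

	out/minikube-linux-amd64 status -p ha-570850 -v=7 --alsologtostderr
	echo $?    # expected to print 7 while any node still has host, kubelet and apiserver down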
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-570850 -n ha-570850
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-570850 logs -n 25: (1.388861112s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m03_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m04 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp testdata/cp-test.txt                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m04_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03:/home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m03 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-570850 node stop m02 -v=7                                                     | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-570850 node start m02 -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:38:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:38:11.730169   22116 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:38:11.730390   22116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:38:11.730398   22116 out.go:304] Setting ErrFile to fd 2...
	I0520 10:38:11.730402   22116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:38:11.730560   22116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:38:11.731084   22116 out.go:298] Setting JSON to false
	I0520 10:38:11.731809   22116 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1235,"bootTime":1716200257,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:38:11.731860   22116 start.go:139] virtualization: kvm guest
	I0520 10:38:11.733921   22116 out.go:177] * [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:38:11.735658   22116 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:38:11.737019   22116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:38:11.735634   22116 notify.go:220] Checking for updates...
	I0520 10:38:11.739319   22116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:38:11.740477   22116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:38:11.741867   22116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:38:11.743183   22116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:38:11.744547   22116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:38:11.779115   22116 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 10:38:11.780420   22116 start.go:297] selected driver: kvm2
	I0520 10:38:11.780431   22116 start.go:901] validating driver "kvm2" against <nil>
	I0520 10:38:11.780441   22116 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:38:11.781080   22116 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:38:11.781145   22116 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:38:11.795133   22116 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:38:11.795167   22116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:38:11.795348   22116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:38:11.795372   22116 cni.go:84] Creating CNI manager for ""
	I0520 10:38:11.795378   22116 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0520 10:38:11.795384   22116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 10:38:11.795428   22116 start.go:340] cluster config:
	{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:38:11.795505   22116 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:38:11.797209   22116 out.go:177] * Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	I0520 10:38:11.798340   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:38:11.798364   22116 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:38:11.798370   22116 cache.go:56] Caching tarball of preloaded images
	I0520 10:38:11.798449   22116 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:38:11.798469   22116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:38:11.798736   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:38:11.798754   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json: {Name:mk35df54ed8a47044341ed5a7e95083f964ddf72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:11.798885   22116 start.go:360] acquireMachinesLock for ha-570850: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:38:11.798917   22116 start.go:364] duration metric: took 17.621µs to acquireMachinesLock for "ha-570850"
	I0520 10:38:11.798939   22116 start.go:93] Provisioning new machine with config: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:38:11.798992   22116 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 10:38:11.800532   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 10:38:11.800653   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:38:11.800703   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:38:11.813982   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0520 10:38:11.814361   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:38:11.814827   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:38:11.814854   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:38:11.815138   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:38:11.815335   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:11.815461   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:11.815593   22116 start.go:159] libmachine.API.Create for "ha-570850" (driver="kvm2")
	I0520 10:38:11.815630   22116 client.go:168] LocalClient.Create starting
	I0520 10:38:11.815663   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:38:11.815699   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:38:11.815718   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:38:11.815784   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:38:11.815815   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:38:11.815835   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:38:11.815866   22116 main.go:141] libmachine: Running pre-create checks...
	I0520 10:38:11.815879   22116 main.go:141] libmachine: (ha-570850) Calling .PreCreateCheck
	I0520 10:38:11.816181   22116 main.go:141] libmachine: (ha-570850) Calling .GetConfigRaw
	I0520 10:38:11.816528   22116 main.go:141] libmachine: Creating machine...
	I0520 10:38:11.816542   22116 main.go:141] libmachine: (ha-570850) Calling .Create
	I0520 10:38:11.816684   22116 main.go:141] libmachine: (ha-570850) Creating KVM machine...
	I0520 10:38:11.817947   22116 main.go:141] libmachine: (ha-570850) DBG | found existing default KVM network
	I0520 10:38:11.818627   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:11.818492   22138 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0520 10:38:11.818659   22116 main.go:141] libmachine: (ha-570850) DBG | created network xml: 
	I0520 10:38:11.818679   22116 main.go:141] libmachine: (ha-570850) DBG | <network>
	I0520 10:38:11.818690   22116 main.go:141] libmachine: (ha-570850) DBG |   <name>mk-ha-570850</name>
	I0520 10:38:11.818701   22116 main.go:141] libmachine: (ha-570850) DBG |   <dns enable='no'/>
	I0520 10:38:11.818714   22116 main.go:141] libmachine: (ha-570850) DBG |   
	I0520 10:38:11.818724   22116 main.go:141] libmachine: (ha-570850) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 10:38:11.818737   22116 main.go:141] libmachine: (ha-570850) DBG |     <dhcp>
	I0520 10:38:11.818750   22116 main.go:141] libmachine: (ha-570850) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 10:38:11.818762   22116 main.go:141] libmachine: (ha-570850) DBG |     </dhcp>
	I0520 10:38:11.818775   22116 main.go:141] libmachine: (ha-570850) DBG |   </ip>
	I0520 10:38:11.818787   22116 main.go:141] libmachine: (ha-570850) DBG |   
	I0520 10:38:11.818797   22116 main.go:141] libmachine: (ha-570850) DBG | </network>
	I0520 10:38:11.818810   22116 main.go:141] libmachine: (ha-570850) DBG | 
	I0520 10:38:11.823649   22116 main.go:141] libmachine: (ha-570850) DBG | trying to create private KVM network mk-ha-570850 192.168.39.0/24...
	I0520 10:38:11.884902   22116 main.go:141] libmachine: (ha-570850) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850 ...
	I0520 10:38:11.884949   22116 main.go:141] libmachine: (ha-570850) DBG | private KVM network mk-ha-570850 192.168.39.0/24 created
	I0520 10:38:11.884999   22116 main.go:141] libmachine: (ha-570850) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:38:11.885064   22116 main.go:141] libmachine: (ha-570850) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:38:11.885109   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:11.884856   22138 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:38:12.109817   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:12.109638   22138 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa...
	I0520 10:38:12.296359   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:12.296233   22138 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/ha-570850.rawdisk...
	I0520 10:38:12.296396   22116 main.go:141] libmachine: (ha-570850) DBG | Writing magic tar header
	I0520 10:38:12.296407   22116 main.go:141] libmachine: (ha-570850) DBG | Writing SSH key tar header
	I0520 10:38:12.296416   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:12.296339   22138 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850 ...
	I0520 10:38:12.296455   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850 (perms=drwx------)
	I0520 10:38:12.296473   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850
	I0520 10:38:12.296482   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:38:12.296512   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:38:12.296552   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:38:12.296569   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:38:12.296587   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:38:12.296601   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:38:12.296619   22116 main.go:141] libmachine: (ha-570850) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:38:12.296638   22116 main.go:141] libmachine: (ha-570850) Creating domain...
	I0520 10:38:12.296654   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:38:12.296674   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:38:12.296686   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:38:12.296708   22116 main.go:141] libmachine: (ha-570850) DBG | Checking permissions on dir: /home
	I0520 10:38:12.296720   22116 main.go:141] libmachine: (ha-570850) DBG | Skipping /home - not owner
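Before the disk image is written, the store path and each ancestor directory the user owns get the owner-execute bit, while directories it does not own (here /home) are skipped. A rough stdlib-only sketch of that walk, assuming a simple "try chmod, log what cannot be changed" policy rather than minikube's exact ownership check:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// ensureExecutable walks from dir up to the filesystem root and adds the
// owner-execute bit to each directory, logging and skipping any it cannot
// change (e.g. /home when not the owner). This is a simplification of the
// checks in the log, which consult ownership explicitly.
func ensureExecutable(dir string) error {
    for d := dir; ; d = filepath.Dir(d) {
        info, err := os.Stat(d)
        if err != nil {
            return err
        }
        perm := info.Mode().Perm()
        if perm&0o100 == 0 {
            if err := os.Chmod(d, perm|0o100); err != nil {
                fmt.Printf("Skipping %s - %v\n", d, err)
            }
        }
        if d == filepath.Dir(d) { // reached the root
            return nil
        }
    }
}

func main() {
    if err := ensureExecutable("/home/jenkins/.minikube/machines/ha-570850"); err != nil {
        fmt.Println(err)
    }
}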
	I0520 10:38:12.297582   22116 main.go:141] libmachine: (ha-570850) define libvirt domain using xml: 
	I0520 10:38:12.297595   22116 main.go:141] libmachine: (ha-570850) <domain type='kvm'>
	I0520 10:38:12.297600   22116 main.go:141] libmachine: (ha-570850)   <name>ha-570850</name>
	I0520 10:38:12.297605   22116 main.go:141] libmachine: (ha-570850)   <memory unit='MiB'>2200</memory>
	I0520 10:38:12.297611   22116 main.go:141] libmachine: (ha-570850)   <vcpu>2</vcpu>
	I0520 10:38:12.297617   22116 main.go:141] libmachine: (ha-570850)   <features>
	I0520 10:38:12.297625   22116 main.go:141] libmachine: (ha-570850)     <acpi/>
	I0520 10:38:12.297636   22116 main.go:141] libmachine: (ha-570850)     <apic/>
	I0520 10:38:12.297644   22116 main.go:141] libmachine: (ha-570850)     <pae/>
	I0520 10:38:12.297659   22116 main.go:141] libmachine: (ha-570850)     
	I0520 10:38:12.297688   22116 main.go:141] libmachine: (ha-570850)   </features>
	I0520 10:38:12.297706   22116 main.go:141] libmachine: (ha-570850)   <cpu mode='host-passthrough'>
	I0520 10:38:12.297712   22116 main.go:141] libmachine: (ha-570850)   
	I0520 10:38:12.297717   22116 main.go:141] libmachine: (ha-570850)   </cpu>
	I0520 10:38:12.297724   22116 main.go:141] libmachine: (ha-570850)   <os>
	I0520 10:38:12.297729   22116 main.go:141] libmachine: (ha-570850)     <type>hvm</type>
	I0520 10:38:12.297735   22116 main.go:141] libmachine: (ha-570850)     <boot dev='cdrom'/>
	I0520 10:38:12.297740   22116 main.go:141] libmachine: (ha-570850)     <boot dev='hd'/>
	I0520 10:38:12.297745   22116 main.go:141] libmachine: (ha-570850)     <bootmenu enable='no'/>
	I0520 10:38:12.297749   22116 main.go:141] libmachine: (ha-570850)   </os>
	I0520 10:38:12.297792   22116 main.go:141] libmachine: (ha-570850)   <devices>
	I0520 10:38:12.297809   22116 main.go:141] libmachine: (ha-570850)     <disk type='file' device='cdrom'>
	I0520 10:38:12.297825   22116 main.go:141] libmachine: (ha-570850)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/boot2docker.iso'/>
	I0520 10:38:12.297836   22116 main.go:141] libmachine: (ha-570850)       <target dev='hdc' bus='scsi'/>
	I0520 10:38:12.297860   22116 main.go:141] libmachine: (ha-570850)       <readonly/>
	I0520 10:38:12.297880   22116 main.go:141] libmachine: (ha-570850)     </disk>
	I0520 10:38:12.297901   22116 main.go:141] libmachine: (ha-570850)     <disk type='file' device='disk'>
	I0520 10:38:12.297920   22116 main.go:141] libmachine: (ha-570850)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:38:12.297938   22116 main.go:141] libmachine: (ha-570850)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/ha-570850.rawdisk'/>
	I0520 10:38:12.297949   22116 main.go:141] libmachine: (ha-570850)       <target dev='hda' bus='virtio'/>
	I0520 10:38:12.297959   22116 main.go:141] libmachine: (ha-570850)     </disk>
	I0520 10:38:12.297970   22116 main.go:141] libmachine: (ha-570850)     <interface type='network'>
	I0520 10:38:12.297981   22116 main.go:141] libmachine: (ha-570850)       <source network='mk-ha-570850'/>
	I0520 10:38:12.297995   22116 main.go:141] libmachine: (ha-570850)       <model type='virtio'/>
	I0520 10:38:12.298004   22116 main.go:141] libmachine: (ha-570850)     </interface>
	I0520 10:38:12.298009   22116 main.go:141] libmachine: (ha-570850)     <interface type='network'>
	I0520 10:38:12.298017   22116 main.go:141] libmachine: (ha-570850)       <source network='default'/>
	I0520 10:38:12.298023   22116 main.go:141] libmachine: (ha-570850)       <model type='virtio'/>
	I0520 10:38:12.298030   22116 main.go:141] libmachine: (ha-570850)     </interface>
	I0520 10:38:12.298036   22116 main.go:141] libmachine: (ha-570850)     <serial type='pty'>
	I0520 10:38:12.298043   22116 main.go:141] libmachine: (ha-570850)       <target port='0'/>
	I0520 10:38:12.298053   22116 main.go:141] libmachine: (ha-570850)     </serial>
	I0520 10:38:12.298061   22116 main.go:141] libmachine: (ha-570850)     <console type='pty'>
	I0520 10:38:12.298071   22116 main.go:141] libmachine: (ha-570850)       <target type='serial' port='0'/>
	I0520 10:38:12.298082   22116 main.go:141] libmachine: (ha-570850)     </console>
	I0520 10:38:12.298089   22116 main.go:141] libmachine: (ha-570850)     <rng model='virtio'>
	I0520 10:38:12.298101   22116 main.go:141] libmachine: (ha-570850)       <backend model='random'>/dev/random</backend>
	I0520 10:38:12.298107   22116 main.go:141] libmachine: (ha-570850)     </rng>
	I0520 10:38:12.298115   22116 main.go:141] libmachine: (ha-570850)     
	I0520 10:38:12.298139   22116 main.go:141] libmachine: (ha-570850)     
	I0520 10:38:12.298151   22116 main.go:141] libmachine: (ha-570850)   </devices>
	I0520 10:38:12.298160   22116 main.go:141] libmachine: (ha-570850) </domain>
	I0520 10:38:12.298169   22116 main.go:141] libmachine: (ha-570850) 
	I0520 10:38:12.301990   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:c6:ad:99 in network default
	I0520 10:38:12.302507   22116 main.go:141] libmachine: (ha-570850) Ensuring networks are active...
	I0520 10:38:12.302538   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:12.303079   22116 main.go:141] libmachine: (ha-570850) Ensuring network default is active
	I0520 10:38:12.303345   22116 main.go:141] libmachine: (ha-570850) Ensuring network mk-ha-570850 is active
	I0520 10:38:12.303779   22116 main.go:141] libmachine: (ha-570850) Getting domain xml...
	I0520 10:38:12.304404   22116 main.go:141] libmachine: (ha-570850) Creating domain...
	I0520 10:38:13.466125   22116 main.go:141] libmachine: (ha-570850) Waiting to get IP...
	I0520 10:38:13.467034   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:13.467385   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:13.467456   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:13.467388   22138 retry.go:31] will retry after 268.975125ms: waiting for machine to come up
	I0520 10:38:13.737798   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:13.738291   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:13.738322   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:13.738178   22138 retry.go:31] will retry after 298.444753ms: waiting for machine to come up
	I0520 10:38:14.038705   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:14.039099   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:14.039130   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:14.039032   22138 retry.go:31] will retry after 456.517682ms: waiting for machine to come up
	I0520 10:38:14.497606   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:14.497927   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:14.497957   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:14.497894   22138 retry.go:31] will retry after 541.99406ms: waiting for machine to come up
	I0520 10:38:15.041607   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:15.042132   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:15.042159   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:15.042073   22138 retry.go:31] will retry after 616.954895ms: waiting for machine to come up
	I0520 10:38:15.660817   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:15.661199   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:15.661230   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:15.661143   22138 retry.go:31] will retry after 627.522657ms: waiting for machine to come up
	I0520 10:38:16.289885   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:16.290316   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:16.290348   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:16.290263   22138 retry.go:31] will retry after 1.116025556s: waiting for machine to come up
	I0520 10:38:17.408172   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:17.408594   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:17.408624   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:17.408554   22138 retry.go:31] will retry after 1.070059394s: waiting for machine to come up
	I0520 10:38:18.480688   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:18.481013   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:18.481075   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:18.481003   22138 retry.go:31] will retry after 1.30907699s: waiting for machine to come up
	I0520 10:38:19.792510   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:19.792901   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:19.792935   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:19.792838   22138 retry.go:31] will retry after 2.051452481s: waiting for machine to come up
	I0520 10:38:21.846242   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:21.846727   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:21.846756   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:21.846669   22138 retry.go:31] will retry after 2.633421547s: waiting for machine to come up
	I0520 10:38:24.482945   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:24.483298   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:24.483325   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:24.483261   22138 retry.go:31] will retry after 2.992536845s: waiting for machine to come up
	I0520 10:38:27.476848   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:27.477274   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:27.477294   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:27.477229   22138 retry.go:31] will retry after 3.075289169s: waiting for machine to come up
	I0520 10:38:30.556330   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:30.556672   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find current IP address of domain ha-570850 in network mk-ha-570850
	I0520 10:38:30.556695   22116 main.go:141] libmachine: (ha-570850) DBG | I0520 10:38:30.556626   22138 retry.go:31] will retry after 3.648633436s: waiting for machine to come up
	I0520 10:38:34.207997   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.208427   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has current primary IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.208458   22116 main.go:141] libmachine: (ha-570850) Found IP for machine: 192.168.39.152
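Each "will retry after ..." line above comes from polling the network's DHCP leases with a growing sleep until the new domain shows an address. A minimal sketch of that retry-with-backoff loop; the growth factor, cap and timeout are illustrative values, not minikube's.

package main

import (
    "errors"
    "fmt"
    "time"
)

// waitForIP polls lookup until it returns an address or the timeout passes,
// sleeping a little longer between attempts, like the "will retry after ..."
// lines above.
func waitForIP(timeout time.Duration, lookup func() (string, error)) (string, error) {
    deadline := time.Now().Add(timeout)
    backoff := 250 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookup(); err == nil {
            return ip, nil
        }
        time.Sleep(backoff)
        if backoff < 4*time.Second {
            backoff += backoff / 2 // grow roughly 1.5x per attempt
        }
    }
    return "", errors.New("timed out waiting for machine IP")
}

func main() {
    start := time.Now()
    ip, err := waitForIP(30*time.Second, func() (string, error) {
        // Stand-in for reading the DHCP leases of network mk-ha-570850.
        if time.Since(start) > 2*time.Second {
            return "192.168.39.152", nil
        }
        return "", errors.New("no lease yet")
    })
    fmt.Println(ip, err)
}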
	I0520 10:38:34.208472   22116 main.go:141] libmachine: (ha-570850) Reserving static IP address...
	I0520 10:38:34.208767   22116 main.go:141] libmachine: (ha-570850) DBG | unable to find host DHCP lease matching {name: "ha-570850", mac: "52:54:00:8e:5a:58", ip: "192.168.39.152"} in network mk-ha-570850
	I0520 10:38:34.278371   22116 main.go:141] libmachine: (ha-570850) DBG | Getting to WaitForSSH function...
	I0520 10:38:34.278403   22116 main.go:141] libmachine: (ha-570850) Reserved static IP address: 192.168.39.152
	I0520 10:38:34.278436   22116 main.go:141] libmachine: (ha-570850) Waiting for SSH to be available...
	I0520 10:38:34.280972   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.281413   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.281441   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.281561   22116 main.go:141] libmachine: (ha-570850) DBG | Using SSH client type: external
	I0520 10:38:34.281582   22116 main.go:141] libmachine: (ha-570850) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa (-rw-------)
	I0520 10:38:34.281598   22116 main.go:141] libmachine: (ha-570850) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:38:34.281604   22116 main.go:141] libmachine: (ha-570850) DBG | About to run SSH command:
	I0520 10:38:34.281613   22116 main.go:141] libmachine: (ha-570850) DBG | exit 0
	I0520 10:38:34.404737   22116 main.go:141] libmachine: (ha-570850) DBG | SSH cmd err, output: <nil>: 
	I0520 10:38:34.405061   22116 main.go:141] libmachine: (ha-570850) KVM machine creation complete!
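The availability probe is simply `exit 0` executed through an external ssh client with the options logged above. A small os/exec sketch of the same probe; the key path and guest address are placeholders.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Placeholders; the log uses the full store path under minikube-integration.
    keyPath := "/home/jenkins/.minikube/machines/ha-570850/id_rsa"
    addr := "docker@192.168.39.152"

    // Same shape as the logged external ssh invocation: host key checking off,
    // known_hosts discarded, identity file forced, then run `exit 0`.
    cmd := exec.Command("ssh",
        "-F", "/dev/null",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        "-p", "22",
        addr, "exit 0")
    out, err := cmd.CombinedOutput()
    fmt.Printf("output=%q err=%v\n", out, err) // err == nil means SSH is reachable
}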
	I0520 10:38:34.405392   22116 main.go:141] libmachine: (ha-570850) Calling .GetConfigRaw
	I0520 10:38:34.405928   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:34.406118   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:34.406280   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:38:34.406292   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:38:34.407540   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:38:34.407553   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:38:34.407558   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:38:34.407564   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.410258   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.410611   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.410625   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.410794   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.410968   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.411111   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.411239   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.411390   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.411597   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.411608   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:38:34.516209   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:38:34.516239   22116 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:38:34.516250   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.519042   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.519373   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.519398   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.519530   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.519726   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.519891   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.520024   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.520172   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.520328   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.520337   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:38:34.625625   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:38:34.625697   22116 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:38:34.625709   22116 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:38:34.625717   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:34.625957   22116 buildroot.go:166] provisioning hostname "ha-570850"
	I0520 10:38:34.625987   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:34.626156   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.628692   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.629065   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.629090   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.629260   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.629450   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.629603   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.629746   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.629957   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.630188   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.630205   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850 && echo "ha-570850" | sudo tee /etc/hostname
	I0520 10:38:34.751379   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:38:34.751404   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.754140   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.754450   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.754482   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.754653   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.754877   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.755018   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.755170   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:34.755331   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:34.755511   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:34.755527   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:38:34.870336   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:38:34.870362   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:38:34.870406   22116 buildroot.go:174] setting up certificates
	I0520 10:38:34.870417   22116 provision.go:84] configureAuth start
	I0520 10:38:34.870433   22116 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:38:34.870673   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:34.873186   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.873425   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.873449   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.873597   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.875678   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.875991   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.876013   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.876166   22116 provision.go:143] copyHostCerts
	I0520 10:38:34.876193   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:38:34.876226   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:38:34.876238   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:38:34.876320   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:38:34.876441   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:38:34.876474   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:38:34.876483   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:38:34.876526   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:38:34.876578   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:38:34.876601   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:38:34.876610   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:38:34.876642   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
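copyHostCerts refreshes key.pem, ca.pem and cert.pem in the store root: if a stale copy exists it is removed, then the file under certs/ is copied in. A minimal remove-then-copy sketch with shortened paths:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// copyCert removes any existing copy at dst and writes src's contents there
// with owner-only permissions, mirroring the found/removing/cp lines above.
func copyCert(src, dst string) error {
    data, err := os.ReadFile(src)
    if err != nil {
        return err
    }
    if _, err := os.Stat(dst); err == nil {
        if err := os.Remove(dst); err != nil {
            return err
        }
    }
    return os.WriteFile(dst, data, 0o600)
}

func main() {
    store := os.ExpandEnv("$HOME/.minikube") // shortened stand-in for the real store path
    for _, name := range []string{"key.pem", "ca.pem", "cert.pem"} {
        err := copyCert(filepath.Join(store, "certs", name), filepath.Join(store, name))
        fmt.Println(name, err)
    }
}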
	I0520 10:38:34.876706   22116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850 san=[127.0.0.1 192.168.39.152 ha-570850 localhost minikube]
	I0520 10:38:34.996437   22116 provision.go:177] copyRemoteCerts
	I0520 10:38:34.996488   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:38:34.996509   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:34.999105   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.999426   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:34.999452   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:34.999600   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:34.999799   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:34.999965   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.000112   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:38:35.085846   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:38:35.085903   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:38:35.112118   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:38:35.112204   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0520 10:38:35.137660   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:38:35.137711   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 10:38:35.162846   22116 provision.go:87] duration metric: took 292.413892ms to configureAuth
	I0520 10:38:35.162874   22116 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:38:35.163047   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:38:35.163177   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.165712   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.165972   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.166006   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.166163   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.166345   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.166513   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.166625   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.166801   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:35.167026   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:35.167051   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:38:35.458789   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:38:35.458816   22116 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:38:35.458833   22116 main.go:141] libmachine: (ha-570850) Calling .GetURL
	I0520 10:38:35.460116   22116 main.go:141] libmachine: (ha-570850) DBG | Using libvirt version 6000000
	I0520 10:38:35.462456   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.462751   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.462782   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.462899   22116 main.go:141] libmachine: Docker is up and running!
	I0520 10:38:35.462911   22116 main.go:141] libmachine: Reticulating splines...
	I0520 10:38:35.462917   22116 client.go:171] duration metric: took 23.647277559s to LocalClient.Create
	I0520 10:38:35.462940   22116 start.go:167] duration metric: took 23.647347204s to libmachine.API.Create "ha-570850"
	I0520 10:38:35.462986   22116 start.go:293] postStartSetup for "ha-570850" (driver="kvm2")
	I0520 10:38:35.463002   22116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:38:35.463027   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.463279   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:38:35.463300   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.465303   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.465619   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.465639   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.465789   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.465970   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.466134   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.466242   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:38:35.547765   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:38:35.552086   22116 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:38:35.552104   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:38:35.552157   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:38:35.552228   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:38:35.552237   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:38:35.552330   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:38:35.562382   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
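Local assets are discovered by walking the .minikube/files tree and mapping every file onto the same path on the guest, which is how files/etc/ssl/certs/111622.pem above becomes /etc/ssl/certs/111622.pem. A short sketch of that mapping; the store path is a shortened stand-in.

package main

import (
    "fmt"
    "io/fs"
    "path/filepath"
)

// listLocalAssets walks filesDir and pairs every regular file with the path
// it should occupy on the guest, so <filesDir>/etc/ssl/certs/111622.pem maps
// to /etc/ssl/certs/111622.pem.
func listLocalAssets(filesDir string) (map[string]string, error) {
    assets := map[string]string{}
    err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, walkErr error) error {
        if walkErr != nil || d.IsDir() {
            return walkErr
        }
        rel, err := filepath.Rel(filesDir, path)
        if err != nil {
            return err
        }
        assets[path] = "/" + filepath.ToSlash(rel)
        return nil
    })
    return assets, err
}

func main() {
    m, err := listLocalAssets("/home/jenkins/.minikube/files") // shortened stand-in path
    fmt.Println(m, err)
}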
	I0520 10:38:35.586151   22116 start.go:296] duration metric: took 123.148208ms for postStartSetup
	I0520 10:38:35.586203   22116 main.go:141] libmachine: (ha-570850) Calling .GetConfigRaw
	I0520 10:38:35.586722   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:35.589272   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.589606   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.589635   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.589886   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:38:35.590052   22116 start.go:128] duration metric: took 23.791050927s to createHost
	I0520 10:38:35.590073   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.592357   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.592668   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.592698   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.592809   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.593009   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.593167   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.593296   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.593430   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:38:35.593627   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:38:35.593687   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:38:35.697911   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201515.672299621
	
	I0520 10:38:35.697931   22116 fix.go:216] guest clock: 1716201515.672299621
	I0520 10:38:35.697938   22116 fix.go:229] Guest: 2024-05-20 10:38:35.672299621 +0000 UTC Remote: 2024-05-20 10:38:35.590063475 +0000 UTC m=+23.895467439 (delta=82.236146ms)
	I0520 10:38:35.697961   22116 fix.go:200] guest clock delta is within tolerance: 82.236146ms
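The guest clock check parses the seconds.nanoseconds timestamp read from the VM, compares it with the host reading taken at the same moment, and accepts the drift if it stays inside a tolerance (82.236146ms here). A small sketch of the comparison using the two timestamps from this log; the 2-second tolerance is an assumed value, the real threshold is not shown in this output.

package main

import (
    "fmt"
    "time"
)

// clockDelta compares the guest timestamp (seconds and nanoseconds parsed
// from the date command run over SSH) with the host reading and reports
// whether the drift stays inside tol.
func clockDelta(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    delta := host.Sub(guest)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tol
}

func main() {
    guest := time.Unix(1716201515, 672299621)       // guest clock from the log above
    host := time.Unix(1716201515, 590063475)        // host reading at the same moment
    d, ok := clockDelta(guest, host, 2*time.Second) // tolerance is an assumed value
    fmt.Println(d, ok)                              // ~82.236146ms, within tolerance
}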
	I0520 10:38:35.697966   22116 start.go:83] releasing machines lock for "ha-570850", held for 23.899037938s
	I0520 10:38:35.697982   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.698245   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:35.700732   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.701131   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.701155   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.701284   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.701798   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.701972   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:38:35.702046   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:38:35.702077   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.702169   22116 ssh_runner.go:195] Run: cat /version.json
	I0520 10:38:35.702187   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:38:35.704775   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705033   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705183   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.705210   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705342   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.705432   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:35.705461   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:35.705524   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.705610   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:38:35.705700   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.705772   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:38:35.705884   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:38:35.705910   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:38:35.706065   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:38:35.786235   22116 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:38:35.786343   22116 ssh_runner.go:195] Run: systemctl --version
	I0520 10:38:35.806121   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:38:35.960092   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:38:35.966066   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:38:35.966125   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:38:35.981958   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:38:35.981984   22116 start.go:494] detecting cgroup driver to use...
	I0520 10:38:35.982072   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:38:35.999580   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:38:36.014620   22116 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:38:36.014683   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:38:36.029032   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:38:36.043261   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:38:36.170229   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:38:36.325354   22116 docker.go:233] disabling docker service ...
	I0520 10:38:36.325424   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:38:36.339908   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:38:36.352965   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:38:36.473506   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:38:36.593087   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:38:36.607149   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:38:36.626143   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:38:36.626207   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.636479   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:38:36.636537   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.646607   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.656447   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.666481   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:38:36.677165   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.687069   22116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.703838   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:38:36.713638   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:38:36.722797   22116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:38:36.722839   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:38:36.736966   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:38:36.746694   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:38:36.860575   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:38:37.002954   22116 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:38:37.003017   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:38:37.007631   22116 start.go:562] Will wait 60s for crictl version
	I0520 10:38:37.007681   22116 ssh_runner.go:195] Run: which crictl
	I0520 10:38:37.011352   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:38:37.050359   22116 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:38:37.050449   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:38:37.078126   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:38:37.107078   22116 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:38:37.108468   22116 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:38:37.111104   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:37.111433   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:38:37.111456   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:38:37.111693   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:38:37.115693   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:38:37.129069   22116 kubeadm.go:877] updating cluster {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:38:37.129178   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:38:37.129219   22116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:38:37.164768   22116 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 10:38:37.164843   22116 ssh_runner.go:195] Run: which lz4
	I0520 10:38:37.168726   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0520 10:38:37.168820   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 10:38:37.172955   22116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 10:38:37.172984   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 10:38:38.576176   22116 crio.go:462] duration metric: took 1.407378501s to copy over tarball
	I0520 10:38:38.576242   22116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 10:38:40.680099   22116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10381902s)
	I0520 10:38:40.680142   22116 crio.go:469] duration metric: took 2.103938293s to extract the tarball
	I0520 10:38:40.680152   22116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 10:38:40.717304   22116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:38:40.759915   22116 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:38:40.759937   22116 cache_images.go:84] Images are preloaded, skipping loading
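The preload handling around the two `sudo crictl images --output json` runs above boils down to: look for one pivot image, and if it is absent, copy the preloaded tarball over SSH, extract it into /var with lz4, and re-check. Below is a minimal Go sketch of that pivot-image check; the struct shape follows crictl's JSON output and the pivot image mirrors the one named in the log, but the file and function names are illustrative, not minikube's actual code.

// preloadcheck.go: sketch of deciding whether preloaded images are already
// present by looking for one "pivot" image in `crictl images --output json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func imagesPreloaded(pivot string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, pivot) {
				return true, nil // pivot image found, assume preload already extracted
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.30.1")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	fmt.Println("images preloaded:", ok)
}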
	I0520 10:38:40.759944   22116 kubeadm.go:928] updating node { 192.168.39.152 8443 v1.30.1 crio true true} ...
	I0520 10:38:40.760047   22116 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:38:40.760109   22116 ssh_runner.go:195] Run: crio config
	I0520 10:38:40.805487   22116 cni.go:84] Creating CNI manager for ""
	I0520 10:38:40.805506   22116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 10:38:40.805520   22116 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:38:40.805545   22116 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-570850 NodeName:ha-570850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:38:40.805697   22116 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-570850"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
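The kubeadm options recorded a few lines earlier are rendered into the multi-document kubeadm.yaml shown just above and copied to /var/tmp/minikube/kubeadm.yaml.new on the node further below. The following is a minimal text/template sketch of that rendering step with a deliberately reduced option set; the struct and template are illustrative, not minikube's real template.

// kubeadmcfg.go: illustrative rendering of a handful of kubeadm options into
// the kind of multi-document config shown above, via text/template.
package main

import (
	"os"
	"text/template"
)

type kubeadmOptions struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
	ControlPlaneHost  string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneHost}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOptions{
		AdvertiseAddress:  "192.168.39.152",
		APIServerPort:     8443,
		NodeName:          "ha-570850",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.30.1",
		ControlPlaneHost:  "control-plane.minikube.internal",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Rendered to stdout here; the log below shows the real rendering being
	// copied to /var/tmp/minikube/kubeadm.yaml.new on the node.
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}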
	
	I0520 10:38:40.805721   22116 kube-vip.go:115] generating kube-vip config ...
	I0520 10:38:40.805774   22116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:38:40.823953   22116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:38:40.824070   22116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
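The two log lines before this manifest show where lb_enable comes from: minikube probes for the IPVS kernel modules with modprobe and only turns on kube-vip's control-plane load balancing when the probe succeeds. A small Go sketch of that decision follows; the function names are illustrative and only a subset of the env vars is modeled.

// kubevip_lb.go: sketch of the auto-enable decision logged above: probe for
// the IPVS modules, and only add lb_enable/lb_port when the probe succeeds.
package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable runs the same modprobe check the log shows being executed
// over SSH on the node.
func ipvsAvailable() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func kubeVipEnv(apiPort string) map[string]string {
	env := map[string]string{
		"vip_arp":   "true",
		"port":      apiPort,
		"cp_enable": "true",
		"address":   "192.168.39.254", // the APIServerHAVIP from the profile
	}
	if ipvsAvailable() {
		// mirrors "auto-enabling control-plane load-balancing in kube-vip"
		env["lb_enable"] = "true"
		env["lb_port"] = apiPort
	}
	return env
}

func main() {
	fmt.Println(kubeVipEnv("8443"))
}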
	I0520 10:38:40.824132   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:38:40.834205   22116 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:38:40.834282   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 10:38:40.843485   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 10:38:40.859593   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:38:40.875643   22116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 10:38:40.891603   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0520 10:38:40.907772   22116 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:38:40.911554   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
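The one-liner above is minikube's idempotent /etc/hosts update: strip any existing line ending in the host name, append the fresh record, write to a temp file, then `sudo cp` it into place so only the final copy needs root. Below is a tiny Go helper that assembles the same command for an arbitrary IP/name pair; the helper name is an illustration, not minikube's API.

// hostsentry.go: build the idempotent /etc/hosts update command shown above.
package main

import "fmt"

func hostsUpdateCmd(ip, name string) string {
	record := ip + "\t" + name // tab-separated record, as in the log line
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, record)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.39.254", "control-plane.minikube.internal"))
}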
	I0520 10:38:40.923517   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:38:41.048526   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:38:41.065763   22116 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.152
	I0520 10:38:41.065783   22116 certs.go:194] generating shared ca certs ...
	I0520 10:38:41.065797   22116 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.065989   22116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:38:41.066050   22116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:38:41.066064   22116 certs.go:256] generating profile certs ...
	I0520 10:38:41.066126   22116 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:38:41.066146   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt with IP's: []
	I0520 10:38:41.223062   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt ...
	I0520 10:38:41.223091   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt: {Name:mk1c16122880ad6a3b17b6d429622e6c1a03b4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.223277   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key ...
	I0520 10:38:41.223292   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key: {Name:mk43f8b6ad1c636cb66aeb447dbd30dc186c0027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.223393   22116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a
	I0520 10:38:41.223416   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.254]
	I0520 10:38:41.344455   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a ...
	I0520 10:38:41.344485   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a: {Name:mk7f061b81223a9de0b98a169f7fe69f29e0d9ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.344656   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a ...
	I0520 10:38:41.344673   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a: {Name:mkf45a4fcfce8e7a84544d6dc139989a07e13978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.344767   22116 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.6b5de75a -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:38:41.344858   22116 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.6b5de75a -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:38:41.344951   22116 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:38:41.344986   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt with IP's: []
	I0520 10:38:41.400685   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt ...
	I0520 10:38:41.400717   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt: {Name:mk7c4fb26d0612e4af4a2f45c3a0baf1b9961239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:38:41.400880   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key ...
	I0520 10:38:41.400895   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key: {Name:mka17d7cd9dd870412246eda6fe95b4c8febc0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
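The profile certificates generated above include an apiserver serving cert whose IP SANs (the apiserver.crt.6b5de75a lines) cover the service ClusterIP 10.96.0.1, loopback, 10.0.0.1, the node IP and the HA VIP 192.168.39.254, with a 26280h lifetime per the profile config. Below is a condensed crypto/x509 sketch of signing such a cert from a CA; the paths, subject names and throwaway self-signed CA are assumptions for illustration, and the DNS SANs minikube also adds are omitted.

// apiservercert.go: condensed sketch of generating a serving certificate with
// the IP SANs listed above, signed by a CA key pair. Error handling is
// condensed and names are illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs as logged for apiserver.crt.6b5de75a above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.152"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Self-sign a throwaway CA so the sketch runs end to end; minikube reuses
	// the existing minikubeCA key pair instead (see the "skipping valid" lines).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, keyPEM, err := signServingCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("apiserver.crt", certPEM, 0o644)
	os.WriteFile("apiserver.key", keyPEM, 0o600)
}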
	I0520 10:38:41.401008   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:38:41.401030   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:38:41.401046   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:38:41.401080   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:38:41.401099   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:38:41.401118   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:38:41.401145   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:38:41.401162   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:38:41.401224   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:38:41.401269   22116 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:38:41.401282   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:38:41.401316   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:38:41.401348   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:38:41.401385   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:38:41.401445   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:38:41.401483   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.401504   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.401522   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.402073   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:38:41.427596   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:38:41.450983   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:38:41.474449   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:38:41.498000   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 10:38:41.520364   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 10:38:41.543405   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:38:41.566383   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:38:41.593451   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:38:41.619857   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:38:41.646812   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:38:41.671984   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:38:41.687982   22116 ssh_runner.go:195] Run: openssl version
	I0520 10:38:41.693764   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:38:41.703811   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.708063   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.708097   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:38:41.713619   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:38:41.723742   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:38:41.733921   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.737971   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.738005   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:38:41.743488   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:38:41.753553   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:38:41.763559   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.767848   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.767888   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:38:41.773219   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:38:41.783313   22116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:38:41.787579   22116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:38:41.787622   22116 kubeadm.go:391] StartCluster: {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:38:41.787689   22116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:38:41.787726   22116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:38:41.824726   22116 cri.go:89] found id: ""
	I0520 10:38:41.824793   22116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:38:41.838012   22116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:38:41.855437   22116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:38:41.870445   22116 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:38:41.870466   22116 kubeadm.go:156] found existing configuration files:
	
	I0520 10:38:41.870512   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:38:41.882116   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:38:41.882175   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:38:41.894722   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:38:41.907082   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:38:41.907127   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:38:41.915796   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:38:41.924057   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:38:41.924108   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:38:41.932650   22116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:38:41.940875   22116 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:38:41.940922   22116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 10:38:41.949797   22116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 10:38:42.178096   22116 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 10:38:53.283971   22116 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:38:53.284019   22116 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:38:53.284100   22116 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:38:53.284212   22116 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:38:53.284326   22116 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0520 10:38:53.284414   22116 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:38:53.286145   22116 out.go:204]   - Generating certificates and keys ...
	I0520 10:38:53.286237   22116 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:38:53.286302   22116 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:38:53.286394   22116 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:38:53.286481   22116 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:38:53.286565   22116 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:38:53.286633   22116 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:38:53.286755   22116 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:38:53.286931   22116 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-570850 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0520 10:38:53.287009   22116 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:38:53.287170   22116 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-570850 localhost] and IPs [192.168.39.152 127.0.0.1 ::1]
	I0520 10:38:53.287264   22116 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:38:53.287353   22116 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:38:53.287415   22116 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:38:53.287490   22116 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:38:53.287563   22116 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:38:53.287643   22116 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:38:53.287711   22116 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:38:53.287819   22116 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:38:53.287869   22116 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:38:53.287937   22116 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:38:53.287998   22116 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:38:53.289698   22116 out.go:204]   - Booting up control plane ...
	I0520 10:38:53.289786   22116 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:38:53.289861   22116 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:38:53.289916   22116 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:38:53.290002   22116 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:38:53.290117   22116 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:38:53.290188   22116 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:38:53.290351   22116 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:38:53.290449   22116 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:38:53.290525   22116 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002313385s
	I0520 10:38:53.290610   22116 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:38:53.290687   22116 kubeadm.go:309] [api-check] The API server is healthy after 5.6149036s
	I0520 10:38:53.290826   22116 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:38:53.291001   22116 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:38:53.291060   22116 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:38:53.291220   22116 kubeadm.go:309] [mark-control-plane] Marking the node ha-570850 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:38:53.291269   22116 kubeadm.go:309] [bootstrap-token] Using token: d9nkx9.0xat5xp4yeo1kymg
	I0520 10:38:53.292564   22116 out.go:204]   - Configuring RBAC rules ...
	I0520 10:38:53.292663   22116 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:38:53.292796   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:38:53.292908   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:38:53.293073   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:38:53.293205   22116 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:38:53.293281   22116 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:38:53.293375   22116 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:38:53.293432   22116 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:38:53.293476   22116 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:38:53.293485   22116 kubeadm.go:309] 
	I0520 10:38:53.293551   22116 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:38:53.293560   22116 kubeadm.go:309] 
	I0520 10:38:53.293647   22116 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:38:53.293661   22116 kubeadm.go:309] 
	I0520 10:38:53.293712   22116 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:38:53.293789   22116 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:38:53.293866   22116 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:38:53.293880   22116 kubeadm.go:309] 
	I0520 10:38:53.293975   22116 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:38:53.293993   22116 kubeadm.go:309] 
	I0520 10:38:53.294030   22116 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:38:53.294041   22116 kubeadm.go:309] 
	I0520 10:38:53.294095   22116 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:38:53.294170   22116 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:38:53.294238   22116 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:38:53.294244   22116 kubeadm.go:309] 
	I0520 10:38:53.294316   22116 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:38:53.294403   22116 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:38:53.294413   22116 kubeadm.go:309] 
	I0520 10:38:53.294524   22116 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d9nkx9.0xat5xp4yeo1kymg \
	I0520 10:38:53.294646   22116 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 10:38:53.294678   22116 kubeadm.go:309] 	--control-plane 
	I0520 10:38:53.294687   22116 kubeadm.go:309] 
	I0520 10:38:53.294784   22116 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:38:53.294793   22116 kubeadm.go:309] 
	I0520 10:38:53.294861   22116 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d9nkx9.0xat5xp4yeo1kymg \
	I0520 10:38:53.294961   22116 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 10:38:53.294974   22116 cni.go:84] Creating CNI manager for ""
	I0520 10:38:53.294984   22116 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0520 10:38:53.296515   22116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 10:38:53.297698   22116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 10:38:53.303134   22116 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 10:38:53.303150   22116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 10:38:53.324827   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 10:38:53.702667   22116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:38:53.702813   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:53.702821   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-570850 minikube.k8s.io/updated_at=2024_05_20T10_38_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-570850 minikube.k8s.io/primary=true
	I0520 10:38:53.732556   22116 ops.go:34] apiserver oom_adj: -16
	I0520 10:38:53.838256   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:54.338854   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:54.839166   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:55.338668   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:55.838915   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:56.338596   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:56.839322   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:57.338462   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:57.838671   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:58.338318   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:58.839182   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:59.339292   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:38:59.838564   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:00.338310   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:00.838519   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:01.338903   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:01.839101   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:02.338923   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:02.839124   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:03.339212   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:03.838389   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:04.339186   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:04.838759   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:05.339169   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:05.838272   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:39:05.955390   22116 kubeadm.go:1107] duration metric: took 12.252643948s to wait for elevateKubeSystemPrivileges
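The run of `kubectl get sa default` calls between 10:38:53 and 10:39:05 above is a readiness poll: the cluster-admin binding for the kube-system default service account can only take effect once the service-account controller has started creating accounts, so the command is retried roughly every 500ms until it succeeds (12.25s in this run). A minimal sketch of such a poll follows; the function name and timeout are illustrative, while the kubectl path and kubeconfig mirror the commands in the log.

// sapoll.go: sketch of the ~500ms poll visible above: keep running
// `kubectl get sa default` until the default service account exists or a
// deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute)
	fmt.Println("wait result:", err)
}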
	W0520 10:39:05.955437   22116 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:39:05.955449   22116 kubeadm.go:393] duration metric: took 24.167826538s to StartCluster
	I0520 10:39:05.955469   22116 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:05.955559   22116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:39:05.956482   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:05.956720   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:39:05.956731   22116 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:39:05.956755   22116 start.go:240] waiting for startup goroutines ...
	I0520 10:39:05.956770   22116 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 10:39:05.956858   22116 addons.go:69] Setting storage-provisioner=true in profile "ha-570850"
	I0520 10:39:05.956873   22116 addons.go:69] Setting default-storageclass=true in profile "ha-570850"
	I0520 10:39:05.956887   22116 addons.go:234] Setting addon storage-provisioner=true in "ha-570850"
	I0520 10:39:05.956912   22116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-570850"
	I0520 10:39:05.956922   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:39:05.956981   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:05.957340   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.957364   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.957380   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.957392   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.972241   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I0520 10:39:05.972433   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I0520 10:39:05.972669   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.972906   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.973218   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.973236   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.973413   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.973433   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.973597   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.973738   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.973778   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:05.974233   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.974277   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.976055   22116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:39:05.976397   22116 kapi.go:59] client config for ha-570850: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 10:39:05.976962   22116 cert_rotation.go:137] Starting client certificate rotation controller
	I0520 10:39:05.977226   22116 addons.go:234] Setting addon default-storageclass=true in "ha-570850"
	I0520 10:39:05.977269   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:39:05.977643   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.977688   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.991328   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0520 10:39:05.991976   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.992066   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0520 10:39:05.992512   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.992530   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.992568   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:05.992916   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.992986   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:05.993003   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:05.993100   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:05.993409   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:05.993973   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:05.994016   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:05.994785   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:39:05.996558   22116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:39:05.997940   22116 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:39:05.997957   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:39:05.997974   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:39:06.000388   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.000750   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:39:06.000817   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.001075   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:39:06.001264   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:39:06.001397   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:39:06.001523   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:39:06.009670   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42821
	I0520 10:39:06.010031   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:06.010441   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:06.010462   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:06.010748   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:06.010932   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:06.012271   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:39:06.012451   22116 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:39:06.012463   22116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:39:06.012475   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:39:06.015176   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.015460   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:39:06.015520   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:06.015708   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:39:06.015952   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:39:06.016110   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:39:06.016242   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:39:06.085059   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 10:39:06.143593   22116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:39:06.204380   22116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:39:06.614259   22116 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
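The pipeline at 10:39:06.085 patches CoreDNS in place: sed inserts a hosts block in front of the `forward . /etc/resolv.conf` line and a `log` directive in front of `errors`, and the result is fed back through `kubectl replace -f -`. Reconstructed from those sed expressions, the patched Corefile then carries directives equivalent to the following (other plugins unchanged and elided):

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf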
	I0520 10:39:06.948366   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948394   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.948410   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948426   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.948785   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.948790   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.948788   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.948808   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.948824   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.948832   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948842   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.948808   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.948938   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.948956   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.949078   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.949286   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.949109   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.949133   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.949433   22116 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0520 10:39:06.949446   22116 round_trippers.go:469] Request Headers:
	I0520 10:39:06.949457   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:39:06.949464   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:39:06.949437   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.949204   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.969710   22116 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0520 10:39:06.970478   22116 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0520 10:39:06.970501   22116 round_trippers.go:469] Request Headers:
	I0520 10:39:06.970512   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:39:06.970518   22116 round_trippers.go:473]     Content-Type: application/json
	I0520 10:39:06.970524   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:39:06.973399   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:39:06.973750   22116 main.go:141] libmachine: Making call to close driver server
	I0520 10:39:06.973765   22116 main.go:141] libmachine: (ha-570850) Calling .Close
	I0520 10:39:06.974108   22116 main.go:141] libmachine: (ha-570850) DBG | Closing plugin on server side
	I0520 10:39:06.974127   22116 main.go:141] libmachine: Successfully made call to close driver server
	I0520 10:39:06.974164   22116 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 10:39:06.976462   22116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 10:39:06.978014   22116 addons.go:505] duration metric: took 1.021245908s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 10:39:06.978070   22116 start.go:245] waiting for cluster config update ...
	I0520 10:39:06.978087   22116 start.go:254] writing updated cluster config ...
	I0520 10:39:06.979981   22116 out.go:177] 
	I0520 10:39:06.982195   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:06.982294   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:39:06.984294   22116 out.go:177] * Starting "ha-570850-m02" control-plane node in "ha-570850" cluster
	I0520 10:39:06.985971   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:39:06.985999   22116 cache.go:56] Caching tarball of preloaded images
	I0520 10:39:06.986140   22116 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:39:06.986157   22116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:39:06.986253   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:39:06.986480   22116 start.go:360] acquireMachinesLock for ha-570850-m02: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:39:06.986544   22116 start.go:364] duration metric: took 37.407µs to acquireMachinesLock for "ha-570850-m02"
	I0520 10:39:06.986567   22116 start.go:93] Provisioning new machine with config: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:39:06.986673   22116 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0520 10:39:06.988380   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 10:39:06.988492   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:06.988534   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:07.003535   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0520 10:39:07.004005   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:07.004467   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:07.004487   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:07.004753   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:07.004960   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:07.005122   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:07.005278   22116 start.go:159] libmachine.API.Create for "ha-570850" (driver="kvm2")
	I0520 10:39:07.005307   22116 client.go:168] LocalClient.Create starting
	I0520 10:39:07.005342   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:39:07.005379   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:39:07.005396   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:39:07.005457   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:39:07.005480   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:39:07.005496   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:39:07.005535   22116 main.go:141] libmachine: Running pre-create checks...
	I0520 10:39:07.005548   22116 main.go:141] libmachine: (ha-570850-m02) Calling .PreCreateCheck
	I0520 10:39:07.005706   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetConfigRaw
	I0520 10:39:07.006043   22116 main.go:141] libmachine: Creating machine...
	I0520 10:39:07.006058   22116 main.go:141] libmachine: (ha-570850-m02) Calling .Create
	I0520 10:39:07.006170   22116 main.go:141] libmachine: (ha-570850-m02) Creating KVM machine...
	I0520 10:39:07.007358   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found existing default KVM network
	I0520 10:39:07.007519   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found existing private KVM network mk-ha-570850
	I0520 10:39:07.007605   22116 main.go:141] libmachine: (ha-570850-m02) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02 ...
	I0520 10:39:07.007639   22116 main.go:141] libmachine: (ha-570850-m02) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:39:07.007695   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.007608   22518 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:39:07.007788   22116 main.go:141] libmachine: (ha-570850-m02) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:39:07.222352   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.222226   22518 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa...
	I0520 10:39:07.538827   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.538683   22518 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/ha-570850-m02.rawdisk...
	I0520 10:39:07.538865   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Writing magic tar header
	I0520 10:39:07.538881   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Writing SSH key tar header
	I0520 10:39:07.538893   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:07.538838   22518 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02 ...
	I0520 10:39:07.539023   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02
	I0520 10:39:07.539049   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:39:07.539063   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02 (perms=drwx------)
	I0520 10:39:07.539079   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:39:07.539088   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:39:07.539098   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:39:07.539106   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:39:07.539122   22116 main.go:141] libmachine: (ha-570850-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:39:07.539141   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:39:07.539152   22116 main.go:141] libmachine: (ha-570850-m02) Creating domain...
	I0520 10:39:07.539167   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:39:07.539175   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:39:07.539198   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:39:07.539206   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Checking permissions on dir: /home
	I0520 10:39:07.539212   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Skipping /home - not owner
	I0520 10:39:07.540358   22116 main.go:141] libmachine: (ha-570850-m02) define libvirt domain using xml: 
	I0520 10:39:07.540381   22116 main.go:141] libmachine: (ha-570850-m02) <domain type='kvm'>
	I0520 10:39:07.540390   22116 main.go:141] libmachine: (ha-570850-m02)   <name>ha-570850-m02</name>
	I0520 10:39:07.540399   22116 main.go:141] libmachine: (ha-570850-m02)   <memory unit='MiB'>2200</memory>
	I0520 10:39:07.540405   22116 main.go:141] libmachine: (ha-570850-m02)   <vcpu>2</vcpu>
	I0520 10:39:07.540415   22116 main.go:141] libmachine: (ha-570850-m02)   <features>
	I0520 10:39:07.540420   22116 main.go:141] libmachine: (ha-570850-m02)     <acpi/>
	I0520 10:39:07.540430   22116 main.go:141] libmachine: (ha-570850-m02)     <apic/>
	I0520 10:39:07.540434   22116 main.go:141] libmachine: (ha-570850-m02)     <pae/>
	I0520 10:39:07.540442   22116 main.go:141] libmachine: (ha-570850-m02)     
	I0520 10:39:07.540472   22116 main.go:141] libmachine: (ha-570850-m02)   </features>
	I0520 10:39:07.540495   22116 main.go:141] libmachine: (ha-570850-m02)   <cpu mode='host-passthrough'>
	I0520 10:39:07.540509   22116 main.go:141] libmachine: (ha-570850-m02)   
	I0520 10:39:07.540519   22116 main.go:141] libmachine: (ha-570850-m02)   </cpu>
	I0520 10:39:07.540533   22116 main.go:141] libmachine: (ha-570850-m02)   <os>
	I0520 10:39:07.540540   22116 main.go:141] libmachine: (ha-570850-m02)     <type>hvm</type>
	I0520 10:39:07.540550   22116 main.go:141] libmachine: (ha-570850-m02)     <boot dev='cdrom'/>
	I0520 10:39:07.540557   22116 main.go:141] libmachine: (ha-570850-m02)     <boot dev='hd'/>
	I0520 10:39:07.540578   22116 main.go:141] libmachine: (ha-570850-m02)     <bootmenu enable='no'/>
	I0520 10:39:07.540597   22116 main.go:141] libmachine: (ha-570850-m02)   </os>
	I0520 10:39:07.540607   22116 main.go:141] libmachine: (ha-570850-m02)   <devices>
	I0520 10:39:07.540621   22116 main.go:141] libmachine: (ha-570850-m02)     <disk type='file' device='cdrom'>
	I0520 10:39:07.540634   22116 main.go:141] libmachine: (ha-570850-m02)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/boot2docker.iso'/>
	I0520 10:39:07.540641   22116 main.go:141] libmachine: (ha-570850-m02)       <target dev='hdc' bus='scsi'/>
	I0520 10:39:07.540646   22116 main.go:141] libmachine: (ha-570850-m02)       <readonly/>
	I0520 10:39:07.540653   22116 main.go:141] libmachine: (ha-570850-m02)     </disk>
	I0520 10:39:07.540659   22116 main.go:141] libmachine: (ha-570850-m02)     <disk type='file' device='disk'>
	I0520 10:39:07.540667   22116 main.go:141] libmachine: (ha-570850-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:39:07.540676   22116 main.go:141] libmachine: (ha-570850-m02)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/ha-570850-m02.rawdisk'/>
	I0520 10:39:07.540683   22116 main.go:141] libmachine: (ha-570850-m02)       <target dev='hda' bus='virtio'/>
	I0520 10:39:07.540689   22116 main.go:141] libmachine: (ha-570850-m02)     </disk>
	I0520 10:39:07.540694   22116 main.go:141] libmachine: (ha-570850-m02)     <interface type='network'>
	I0520 10:39:07.540720   22116 main.go:141] libmachine: (ha-570850-m02)       <source network='mk-ha-570850'/>
	I0520 10:39:07.540766   22116 main.go:141] libmachine: (ha-570850-m02)       <model type='virtio'/>
	I0520 10:39:07.540780   22116 main.go:141] libmachine: (ha-570850-m02)     </interface>
	I0520 10:39:07.540787   22116 main.go:141] libmachine: (ha-570850-m02)     <interface type='network'>
	I0520 10:39:07.540800   22116 main.go:141] libmachine: (ha-570850-m02)       <source network='default'/>
	I0520 10:39:07.540808   22116 main.go:141] libmachine: (ha-570850-m02)       <model type='virtio'/>
	I0520 10:39:07.540819   22116 main.go:141] libmachine: (ha-570850-m02)     </interface>
	I0520 10:39:07.540829   22116 main.go:141] libmachine: (ha-570850-m02)     <serial type='pty'>
	I0520 10:39:07.540839   22116 main.go:141] libmachine: (ha-570850-m02)       <target port='0'/>
	I0520 10:39:07.540851   22116 main.go:141] libmachine: (ha-570850-m02)     </serial>
	I0520 10:39:07.540861   22116 main.go:141] libmachine: (ha-570850-m02)     <console type='pty'>
	I0520 10:39:07.540877   22116 main.go:141] libmachine: (ha-570850-m02)       <target type='serial' port='0'/>
	I0520 10:39:07.540888   22116 main.go:141] libmachine: (ha-570850-m02)     </console>
	I0520 10:39:07.540897   22116 main.go:141] libmachine: (ha-570850-m02)     <rng model='virtio'>
	I0520 10:39:07.540908   22116 main.go:141] libmachine: (ha-570850-m02)       <backend model='random'>/dev/random</backend>
	I0520 10:39:07.540918   22116 main.go:141] libmachine: (ha-570850-m02)     </rng>
	I0520 10:39:07.540945   22116 main.go:141] libmachine: (ha-570850-m02)     
	I0520 10:39:07.540954   22116 main.go:141] libmachine: (ha-570850-m02)     
	I0520 10:39:07.540994   22116 main.go:141] libmachine: (ha-570850-m02)   </devices>
	I0520 10:39:07.541017   22116 main.go:141] libmachine: (ha-570850-m02) </domain>
	I0520 10:39:07.541045   22116 main.go:141] libmachine: (ha-570850-m02) 
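The XML block above is what the kvm2 driver hands to libvirt when it defines the m02 guest. As a minimal sketch of that define-and-boot step, assuming the libvirt.org/go/libvirt bindings (the driver uses its own wrapper, so the names here are illustrative only):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart registers a domain from the XML the driver built and boots it.
    func defineAndStart(domainXML string) error {
        // qemu:///system is the KVMQemuURI shown in the machine config above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create() // corresponds to "Creating domain..." in the log
    }

    func main() {
        log.Println("sketch only: the domain XML comes from the driver, as logged above")
    }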
	I0520 10:39:07.548277   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:85:10:a4 in network default
	I0520 10:39:07.548893   22116 main.go:141] libmachine: (ha-570850-m02) Ensuring networks are active...
	I0520 10:39:07.548911   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:07.549679   22116 main.go:141] libmachine: (ha-570850-m02) Ensuring network default is active
	I0520 10:39:07.550054   22116 main.go:141] libmachine: (ha-570850-m02) Ensuring network mk-ha-570850 is active
	I0520 10:39:07.550404   22116 main.go:141] libmachine: (ha-570850-m02) Getting domain xml...
	I0520 10:39:07.551042   22116 main.go:141] libmachine: (ha-570850-m02) Creating domain...
	I0520 10:39:08.760553   22116 main.go:141] libmachine: (ha-570850-m02) Waiting to get IP...
	I0520 10:39:08.761403   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:08.761875   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:08.761928   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:08.761845   22518 retry.go:31] will retry after 308.455948ms: waiting for machine to come up
	I0520 10:39:09.072591   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:09.073264   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:09.073293   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:09.073213   22518 retry.go:31] will retry after 383.205669ms: waiting for machine to come up
	I0520 10:39:09.458510   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:09.459020   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:09.459068   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:09.458988   22518 retry.go:31] will retry after 396.48102ms: waiting for machine to come up
	I0520 10:39:09.856507   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:09.856981   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:09.857008   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:09.856941   22518 retry.go:31] will retry after 553.948365ms: waiting for machine to come up
	I0520 10:39:10.412589   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:10.413029   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:10.413073   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:10.413002   22518 retry.go:31] will retry after 526.539632ms: waiting for machine to come up
	I0520 10:39:10.940743   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:10.941226   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:10.941254   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:10.941165   22518 retry.go:31] will retry after 794.865404ms: waiting for machine to come up
	I0520 10:39:11.738030   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:11.738544   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:11.738570   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:11.738477   22518 retry.go:31] will retry after 1.0140341s: waiting for machine to come up
	I0520 10:39:12.753637   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:12.754035   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:12.754095   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:12.754026   22518 retry.go:31] will retry after 1.031388488s: waiting for machine to come up
	I0520 10:39:13.787382   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:13.787765   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:13.787803   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:13.787732   22518 retry.go:31] will retry after 1.73196699s: waiting for machine to come up
	I0520 10:39:15.521538   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:15.521914   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:15.521936   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:15.521886   22518 retry.go:31] will retry after 2.161679329s: waiting for machine to come up
	I0520 10:39:17.685849   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:17.686404   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:17.686432   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:17.686344   22518 retry.go:31] will retry after 2.721641433s: waiting for machine to come up
	I0520 10:39:20.411165   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:20.411496   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:20.411522   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:20.411468   22518 retry.go:31] will retry after 2.809218862s: waiting for machine to come up
	I0520 10:39:23.222165   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:23.222465   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find current IP address of domain ha-570850-m02 in network mk-ha-570850
	I0520 10:39:23.222500   22116 main.go:141] libmachine: (ha-570850-m02) DBG | I0520 10:39:23.222424   22518 retry.go:31] will retry after 4.068354924s: waiting for machine to come up
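The repeated "will retry after ..." lines come from a bounded wait loop: the driver keeps asking the mk-ha-570850 network for a DHCP lease matching the new MAC and sleeps a growing interval between attempts. Below is a rough Go equivalent of that loop with a hypothetical lookup callback; it is not minikube's actual retry helper.

    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // sleeping a little longer after each failed attempt, as in the log above.
    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
        backoff := 300 * time.Millisecond
        start := time.Now()
        for {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            if time.Since(start) > deadline {
                return "", fmt.Errorf("timed out waiting for machine IP after %s", deadline)
            }
            log.Printf("will retry after %s: waiting for machine to come up", backoff)
            time.Sleep(backoff)
            backoff = time.Duration(float64(backoff) * 1.5) // intervals grow roughly like those logged
        }
    }

    func main() {
        // Stand-in lookup that never finds an address, to exercise the timeout path.
        _, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
        log.Println(err)
    }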
	I0520 10:39:27.292423   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:27.292962   22116 main.go:141] libmachine: (ha-570850-m02) Found IP for machine: 192.168.39.111
	I0520 10:39:27.292991   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has current primary IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:27.293001   22116 main.go:141] libmachine: (ha-570850-m02) Reserving static IP address...
	I0520 10:39:27.293329   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find host DHCP lease matching {name: "ha-570850-m02", mac: "52:54:00:02:04:25", ip: "192.168.39.111"} in network mk-ha-570850
	I0520 10:39:27.362808   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Getting to WaitForSSH function...
	I0520 10:39:27.362838   22116 main.go:141] libmachine: (ha-570850-m02) Reserved static IP address: 192.168.39.111
	I0520 10:39:27.362851   22116 main.go:141] libmachine: (ha-570850-m02) Waiting for SSH to be available...
	I0520 10:39:27.365524   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:27.365868   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850
	I0520 10:39:27.365912   22116 main.go:141] libmachine: (ha-570850-m02) DBG | unable to find defined IP address of network mk-ha-570850 interface with MAC address 52:54:00:02:04:25
	I0520 10:39:27.366107   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH client type: external
	I0520 10:39:27.366126   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa (-rw-------)
	I0520 10:39:27.366174   22116 main.go:141] libmachine: (ha-570850-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:39:27.366192   22116 main.go:141] libmachine: (ha-570850-m02) DBG | About to run SSH command:
	I0520 10:39:27.366204   22116 main.go:141] libmachine: (ha-570850-m02) DBG | exit 0
	I0520 10:39:27.369640   22116 main.go:141] libmachine: (ha-570850-m02) DBG | SSH cmd err, output: exit status 255: 
	I0520 10:39:27.369661   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 10:39:27.369670   22116 main.go:141] libmachine: (ha-570850-m02) DBG | command : exit 0
	I0520 10:39:27.369678   22116 main.go:141] libmachine: (ha-570850-m02) DBG | err     : exit status 255
	I0520 10:39:27.369689   22116 main.go:141] libmachine: (ha-570850-m02) DBG | output  : 
	I0520 10:39:30.371855   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Getting to WaitForSSH function...
	I0520 10:39:30.374273   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.374696   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.374725   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.374812   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH client type: external
	I0520 10:39:30.374835   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa (-rw-------)
	I0520 10:39:30.374866   22116 main.go:141] libmachine: (ha-570850-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:39:30.374888   22116 main.go:141] libmachine: (ha-570850-m02) DBG | About to run SSH command:
	I0520 10:39:30.374896   22116 main.go:141] libmachine: (ha-570850-m02) DBG | exit 0
	I0520 10:39:30.501001   22116 main.go:141] libmachine: (ha-570850-m02) DBG | SSH cmd err, output: <nil>: 
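The WaitForSSH probes above shell out to /usr/bin/ssh with host-key checking disabled and run "exit 0"; an exit status of 255, as in the first attempt, means sshd is not accepting connections yet and the probe is retried. A minimal sketch of that readiness check, with the option list trimmed from the one logged above:

    package main

    import (
        "log"
        "os/exec"
    )

    // sshReady returns true once `ssh ... exit 0` succeeds against the new VM.
    // A non-zero (typically 255) exit status means sshd is not up yet.
    func sshReady(ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        log.Println("ready:", sshReady("192.168.39.111",
            "/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa"))
    }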
	I0520 10:39:30.501285   22116 main.go:141] libmachine: (ha-570850-m02) KVM machine creation complete!
	I0520 10:39:30.501636   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetConfigRaw
	I0520 10:39:30.502176   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:30.502377   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:30.502502   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:39:30.502518   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:39:30.503643   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:39:30.503660   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:39:30.503667   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:39:30.503675   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.505760   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.506063   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.506087   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.506220   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.506410   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.506565   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.506746   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.506928   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.507140   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.507151   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:39:30.612174   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:39:30.612194   22116 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:39:30.612201   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.614747   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.615126   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.615152   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.615284   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.615490   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.615661   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.615802   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.615986   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.616154   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.616164   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:39:30.725354   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:39:30.725456   22116 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:39:30.725468   22116 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:39:30.725479   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:30.725703   22116 buildroot.go:166] provisioning hostname "ha-570850-m02"
	I0520 10:39:30.725724   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:30.725936   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.728383   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.728766   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.728790   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.728967   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.729133   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.729289   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.729424   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.729594   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.729763   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.729784   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850-m02 && echo "ha-570850-m02" | sudo tee /etc/hostname
	I0520 10:39:30.851186   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850-m02
	
	I0520 10:39:30.851214   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.853747   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.854016   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.854040   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.854209   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:30.854381   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.854497   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:30.854615   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:30.854749   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:30.854960   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:30.854984   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:39:30.969623   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:39:30.969651   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:39:30.969686   22116 buildroot.go:174] setting up certificates
	I0520 10:39:30.969698   22116 provision.go:84] configureAuth start
	I0520 10:39:30.969712   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetMachineName
	I0520 10:39:30.970030   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:30.972733   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.973127   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.973151   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.973285   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:30.975420   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.975834   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:30.975855   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:30.975975   22116 provision.go:143] copyHostCerts
	I0520 10:39:30.976007   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:39:30.976039   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:39:30.976061   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:39:30.976128   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:39:30.976205   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:39:30.976221   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:39:30.976228   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:39:30.976251   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:39:30.976301   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:39:30.976317   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:39:30.976323   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:39:30.976342   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:39:30.976398   22116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850-m02 san=[127.0.0.1 192.168.39.111 ha-570850-m02 localhost minikube]
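provision.go then mints a server certificate for the new node, signed by the shared minikube CA and carrying the SANs listed above (loopback, the node IP, the hostname, localhost, minikube). The following is a self-contained Go sketch of that SAN handling and signing step; it is not minikube's own helper, and the key size and usages are assumptions.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server certificate for the listed SANs with the given CA.
    // IP-shaped SANs go into IPAddresses, everything else into DNSNames.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        log.Println("sketch only: CA material comes from .minikube/certs, as logged above")
    }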
	I0520 10:39:31.130245   22116 provision.go:177] copyRemoteCerts
	I0520 10:39:31.130300   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:39:31.130323   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.133078   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.133415   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.133441   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.133607   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.133919   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.134080   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.134226   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:31.223915   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:39:31.224018   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:39:31.247409   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:39:31.247466   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:39:31.272356   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:39:31.272416   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:39:31.297579   22116 provision.go:87] duration metric: took 327.868675ms to configureAuth
	I0520 10:39:31.297608   22116 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:39:31.297817   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:31.297914   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.300270   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.300976   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.301021   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.301281   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.301663   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.301943   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.302207   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.302343   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:31.302550   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:31.302572   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:39:31.596508   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:39:31.596529   22116 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:39:31.596536   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetURL
	I0520 10:39:31.597741   22116 main.go:141] libmachine: (ha-570850-m02) DBG | Using libvirt version 6000000
	I0520 10:39:31.599832   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.600219   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.600249   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.600414   22116 main.go:141] libmachine: Docker is up and running!
	I0520 10:39:31.600433   22116 main.go:141] libmachine: Reticulating splines...
	I0520 10:39:31.600439   22116 client.go:171] duration metric: took 24.595123041s to LocalClient.Create
	I0520 10:39:31.600459   22116 start.go:167] duration metric: took 24.5951841s to libmachine.API.Create "ha-570850"
	I0520 10:39:31.600468   22116 start.go:293] postStartSetup for "ha-570850-m02" (driver="kvm2")
	I0520 10:39:31.600478   22116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:39:31.600503   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.600715   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:39:31.600735   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.602815   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.603128   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.603153   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.603306   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.603461   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.603577   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.603730   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:31.689013   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:39:31.693322   22116 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:39:31.693338   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:39:31.693397   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:39:31.693462   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:39:31.693471   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:39:31.693547   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:39:31.702965   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:39:31.726399   22116 start.go:296] duration metric: took 125.918505ms for postStartSetup
	I0520 10:39:31.726442   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetConfigRaw
	I0520 10:39:31.726954   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:31.729475   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.729762   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.729790   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.730014   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:39:31.730185   22116 start.go:128] duration metric: took 24.743500689s to createHost
	I0520 10:39:31.730205   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.732251   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.732574   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.732602   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.732737   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.732949   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.733109   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.733235   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.733376   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:39:31.733520   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I0520 10:39:31.733530   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:39:31.841774   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201571.818898623
	
	I0520 10:39:31.841796   22116 fix.go:216] guest clock: 1716201571.818898623
	I0520 10:39:31.841803   22116 fix.go:229] Guest: 2024-05-20 10:39:31.818898623 +0000 UTC Remote: 2024-05-20 10:39:31.730196568 +0000 UTC m=+80.035600532 (delta=88.702055ms)
	I0520 10:39:31.841817   22116 fix.go:200] guest clock delta is within tolerance: 88.702055ms
	I0520 10:39:31.841823   22116 start.go:83] releasing machines lock for "ha-570850-m02", held for 24.855267636s
	I0520 10:39:31.841840   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.842065   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:31.844853   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.845223   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.845250   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.846987   22116 out.go:177] * Found network options:
	I0520 10:39:31.848419   22116 out.go:177]   - NO_PROXY=192.168.39.152
	W0520 10:39:31.849635   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:39:31.849662   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.850170   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.850378   22116 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:39:31.850494   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:39:31.850549   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	W0520 10:39:31.850580   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:39:31.850650   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:39:31.850670   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:39:31.853241   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853510   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853590   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.853614   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853773   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.853879   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:31.853906   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:31.853931   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.854034   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:39:31.854107   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.854178   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:39:31.854231   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:31.854305   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:39:31.854407   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:39:32.092014   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:39:32.098694   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:39:32.098759   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:39:32.114429   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:39:32.114450   22116 start.go:494] detecting cgroup driver to use...
	I0520 10:39:32.114506   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:39:32.130116   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:39:32.143233   22116 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:39:32.143286   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:39:32.156247   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:39:32.169208   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:39:32.281948   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:39:32.447374   22116 docker.go:233] disabling docker service ...
	I0520 10:39:32.447452   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:39:32.462556   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:39:32.475137   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:39:32.596131   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:39:32.718787   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:39:32.732595   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:39:32.750542   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:39:32.750619   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.761208   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:39:32.761282   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.771691   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.781850   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.792064   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:39:32.803228   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.813962   22116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:39:32.833755   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
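Taken together, the sed edits above should leave the CRI-O drop-in with the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl used for this run; a quick way to confirm on the guest (a sketch, with the expected values copied from the commands above) would be:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",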
	I0520 10:39:32.844816   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:39:32.854759   22116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:39:32.854814   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:39:32.868132   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:39:32.877630   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:39:32.997160   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:39:33.132143   22116 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:39:33.132205   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:39:33.136711   22116 start.go:562] Will wait 60s for crictl version
	I0520 10:39:33.136761   22116 ssh_runner.go:195] Run: which crictl
	I0520 10:39:33.140321   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:39:33.181132   22116 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:39:33.181218   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:39:33.209052   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:39:33.237968   22116 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:39:33.239273   22116 out.go:177]   - env NO_PROXY=192.168.39.152
	I0520 10:39:33.240457   22116 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:39:33.243151   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:33.243477   22116 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:39:21 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:39:33.243497   22116 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:39:33.243689   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:39:33.247683   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:39:33.260509   22116 mustload.go:65] Loading cluster: ha-570850
	I0520 10:39:33.260696   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:39:33.260986   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:33.261022   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:33.275277   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0520 10:39:33.275611   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:33.276078   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:33.276095   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:33.276389   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:33.276579   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:39:33.278034   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:39:33.278300   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:39:33.278322   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:39:33.292610   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41971
	I0520 10:39:33.292988   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:39:33.293370   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:39:33.293390   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:39:33.293720   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:39:33.293891   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:39:33.294042   22116 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.111
	I0520 10:39:33.294053   22116 certs.go:194] generating shared ca certs ...
	I0520 10:39:33.294065   22116 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:33.294234   22116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:39:33.294299   22116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:39:33.294313   22116 certs.go:256] generating profile certs ...
	I0520 10:39:33.294404   22116 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:39:33.294434   22116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e
	I0520 10:39:33.294454   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.254]
	I0520 10:39:33.433702   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e ...
	I0520 10:39:33.433727   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e: {Name:mk50e0176d866330008c6bf0741466d79579f158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:33.433869   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e ...
	I0520 10:39:33.433882   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e: {Name:mkf694731ad9752cd8224b47057675ee2ef4c0f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:39:33.433955   22116 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.5385822e -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:39:33.434072   22116 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.5385822e -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:39:33.434224   22116 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:39:33.434241   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:39:33.434257   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:39:33.434271   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:39:33.434283   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:39:33.434295   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:39:33.434307   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:39:33.434320   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:39:33.434332   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:39:33.434379   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:39:33.434407   22116 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:39:33.434416   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:39:33.434436   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:39:33.434457   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:39:33.434478   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:39:33.434529   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:39:33.434561   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:33.434575   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:39:33.434588   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:39:33.434618   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:39:33.437512   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:33.437878   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:39:33.437898   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:39:33.438079   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:39:33.438231   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:39:33.438386   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:39:33.438503   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:39:33.513250   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 10:39:33.518649   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 10:39:33.530768   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 10:39:33.534972   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 10:39:33.545492   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 10:39:33.549523   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 10:39:33.560108   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 10:39:33.564246   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0520 10:39:33.576032   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 10:39:33.582075   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 10:39:33.592577   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 10:39:33.599181   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 10:39:33.610518   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:39:33.635730   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:39:33.658738   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:39:33.682051   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:39:33.704864   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 10:39:33.727326   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:39:33.750031   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:39:33.773040   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:39:33.795743   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:39:33.818902   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:39:33.843003   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:39:33.865500   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 10:39:33.883022   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 10:39:33.900124   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 10:39:33.917533   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0520 10:39:33.933804   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 10:39:33.950010   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 10:39:33.966659   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 10:39:33.983641   22116 ssh_runner.go:195] Run: openssl version
	I0520 10:39:33.989734   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:39:34.001022   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:34.005480   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:34.005531   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:39:34.011505   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:39:34.022713   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:39:34.033721   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:39:34.038023   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:39:34.038066   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:39:34.043598   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:39:34.054302   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:39:34.066192   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:39:34.070488   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:39:34.070536   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:39:34.075919   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:39:34.086630   22116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:39:34.090432   22116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:39:34.090484   22116 kubeadm.go:928] updating node {m02 192.168.39.111 8443 v1.30.1 crio true true} ...
	I0520 10:39:34.090577   22116 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:39:34.090603   22116 kube-vip.go:115] generating kube-vip config ...
	I0520 10:39:34.090639   22116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:39:34.108817   22116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:39:34.108876   22116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
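This generated manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), where the kubelet runs it as a static pod that manages the control-plane VIP. A minimal spot check (a sketch; interface and address taken from the config above, and the VIP is only bound on whichever control-plane node currently holds the kube-vip lease) would be:

    sudo crictl ps --name kube-vip                     # static pod container should be running
    ip -4 addr show dev eth0 | grep 192.168.39.254     # VIP present if this node holds the lease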
	I0520 10:39:34.108921   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:39:34.118705   22116 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 10:39:34.118758   22116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 10:39:34.128571   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 10:39:34.128595   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:39:34.128627   22116 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0520 10:39:34.128652   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:39:34.128672   22116 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0520 10:39:34.134262   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 10:39:34.134287   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 10:40:06.114857   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:40:06.114962   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:40:06.120508   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 10:40:06.120545   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 10:40:39.954103   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:40:39.969219   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:40:39.969302   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:40:39.973800   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 10:40:39.973835   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 10:40:40.362696   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 10:40:40.373412   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 10:40:40.390213   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:40:40.406055   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:40:40.422300   22116 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:40:40.426365   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:40:40.438809   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:40:40.570970   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:40:40.590302   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:40:40.590810   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:40:40.590864   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:40:40.605443   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I0520 10:40:40.605836   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:40:40.606447   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:40:40.606474   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:40:40.606871   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:40:40.607102   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:40:40.607285   22116 start.go:316] joinCluster: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:40:40.607386   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 10:40:40.607408   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:40:40.610720   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:40:40.611154   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:40:40.611183   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:40:40.611313   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:40:40.611479   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:40:40.611629   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:40:40.611777   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:40:40.768415   22116 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:40:40.768472   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 98k8uo.fdec3swqm00gq5a1 --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443"
	I0520 10:41:01.785471   22116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 98k8uo.fdec3swqm00gq5a1 --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m02 --control-plane --apiserver-advertise-address=192.168.39.111 --apiserver-bind-port=8443": (21.016969951s)
	I0520 10:41:01.785503   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 10:41:02.300389   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-570850-m02 minikube.k8s.io/updated_at=2024_05_20T10_41_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-570850 minikube.k8s.io/primary=false
	I0520 10:41:02.438024   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-570850-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 10:41:02.629534   22116 start.go:318] duration metric: took 22.022247037s to joinCluster
	I0520 10:41:02.629609   22116 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:41:02.631461   22116 out.go:177] * Verifying Kubernetes components...
	I0520 10:41:02.629947   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:02.632826   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:41:02.886193   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:41:02.986524   22116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:41:02.986857   22116 kapi.go:59] client config for ha-570850: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 10:41:02.986938   22116 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I0520 10:41:02.987194   22116 node_ready.go:35] waiting up to 6m0s for node "ha-570850-m02" to be "Ready" ...
	I0520 10:41:02.987280   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:02.987289   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:02.987300   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:02.987306   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:02.999872   22116 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0520 10:41:03.488109   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:03.488131   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:03.488140   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:03.488145   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:03.491335   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:03.987444   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:03.987463   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:03.987471   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:03.987476   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:03.991307   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:04.487477   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:04.487499   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:04.487506   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:04.487509   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:04.491467   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:04.987667   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:04.987693   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:04.987704   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:04.987711   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:04.991266   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:04.992173   22116 node_ready.go:53] node "ha-570850-m02" has status "Ready":"False"
	I0520 10:41:05.487468   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:05.487490   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:05.487498   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:05.487503   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:05.490924   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:05.988316   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:05.988344   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:05.988354   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:05.988363   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:05.991871   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:06.488036   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:06.488061   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:06.488074   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:06.488080   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:06.492192   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:06.988297   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:06.988321   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:06.988330   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:06.988334   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:06.991458   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:06.992306   22116 node_ready.go:53] node "ha-570850-m02" has status "Ready":"False"
	I0520 10:41:07.487624   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:07.487645   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:07.487653   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:07.487656   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:07.495416   22116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 10:41:07.987398   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:07.987421   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:07.987428   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:07.987433   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:07.990508   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:08.488365   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:08.488387   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:08.488396   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:08.488400   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:08.491732   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:08.987824   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:08.987848   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:08.987857   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:08.987861   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:08.991307   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:09.487360   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:09.487384   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:09.487393   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:09.487399   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:09.490706   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:09.491633   22116 node_ready.go:53] node "ha-570850-m02" has status "Ready":"False"
	I0520 10:41:09.987744   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:09.987774   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:09.987782   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:09.987785   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:09.996145   22116 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0520 10:41:10.488253   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:10.488275   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:10.488285   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:10.488291   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:10.491665   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:10.987847   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:10.987872   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:10.987883   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:10.987889   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:10.990906   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:10.991448   22116 node_ready.go:49] node "ha-570850-m02" has status "Ready":"True"
	I0520 10:41:10.991462   22116 node_ready.go:38] duration metric: took 8.004248686s for node "ha-570850-m02" to be "Ready" ...
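The wait above simply polls the node object until its Ready condition reports True; a rough manual equivalent (a sketch, assuming kubectl is available on the host and using the kubeconfig path logged in this run) is:

    kubectl --kubeconfig /home/jenkins/minikube-integration/18925-3819/kubeconfig \
        get node ha-570850-m02 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once the node is Ready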
	I0520 10:41:10.991470   22116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:41:10.991532   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:10.991542   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:10.991549   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:10.991552   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:10.996215   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:11.001955   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.002039   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jw77g
	I0520 10:41:11.002050   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.002057   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.002067   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.004771   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.005380   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:11.005395   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.005401   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.005404   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.007919   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.008395   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:11.008409   22116 pod_ready.go:81] duration metric: took 6.432561ms for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.008416   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.008456   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xr965
	I0520 10:41:11.008463   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.008470   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.008476   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.011121   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.011647   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:11.011661   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.011668   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.011671   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.013872   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.014303   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:11.014318   22116 pod_ready.go:81] duration metric: took 5.893335ms for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.014327   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.014365   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850
	I0520 10:41:11.014372   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.014378   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.014382   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.016490   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.017031   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:11.017043   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.017055   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.017059   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.019061   22116 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0520 10:41:11.019599   22116 pod_ready.go:92] pod "etcd-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:11.019616   22116 pod_ready.go:81] duration metric: took 5.284495ms for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.019623   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:11.019662   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:11.019669   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.019677   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.019681   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.022317   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.022962   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:11.022979   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.022989   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.022995   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.025524   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:11.520515   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:11.520536   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.520543   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.520546   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.523892   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:11.524481   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:11.524494   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:11.524500   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:11.524504   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:11.527190   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:12.020783   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:12.020817   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.020828   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.020833   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.023936   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:12.024469   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:12.024484   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.024491   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.024495   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.027017   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:12.520532   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:12.520557   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.520565   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.520570   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.523743   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:12.524556   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:12.524571   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:12.524580   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:12.524585   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:12.527344   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:13.020163   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:13.020183   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.020189   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.020192   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.023756   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:13.025120   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:13.025135   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.025142   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.025148   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.027603   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:13.028144   22116 pod_ready.go:102] pod "etcd-ha-570850-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 10:41:13.520507   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:13.520528   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.520536   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.520539   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.524399   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:13.525041   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:13.525065   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:13.525072   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:13.525075   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:13.527746   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:14.020394   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:14.020418   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.020428   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.020433   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.023994   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:14.024536   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:14.024549   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.024559   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.024568   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.027063   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:14.519855   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:14.519876   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.519884   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.519888   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.523369   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:14.524061   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:14.524082   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:14.524093   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:14.524097   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:14.526791   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.019892   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:41:15.019914   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.019922   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.019926   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.023048   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.023665   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.023679   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.023687   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.023692   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.026213   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.026750   22116 pod_ready.go:92] pod "etcd-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.026766   22116 pod_ready.go:81] duration metric: took 4.007136695s for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.026779   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.026825   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850
	I0520 10:41:15.026833   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.026840   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.026844   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.029376   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.030222   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:15.030238   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.030249   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.030256   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.032576   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.033104   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.033120   22116 pod_ready.go:81] duration metric: took 6.335034ms for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.033129   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.033172   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m02
	I0520 10:41:15.033180   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.033187   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.033190   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.035714   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.036374   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.036387   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.036395   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.036398   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.038750   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:15.039299   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.039314   22116 pod_ready.go:81] duration metric: took 6.179289ms for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.039322   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.188708   22116 request.go:629] Waited for 149.327519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:41:15.188789   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:41:15.188797   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.188808   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.188814   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.192307   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.388608   22116 request.go:629] Waited for 195.362547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:15.388671   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:15.388678   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.388686   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.388690   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.392347   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.393621   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:15.393639   22116 pod_ready.go:81] duration metric: took 354.309662ms for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
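Note: the repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter, not from server-side API Priority and Fairness. A minimal sketch of where that limit lives (hypothetical kubeconfig path and QPS/Burst values, not minikube's own configuration):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test harness writes its own under the job home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side once requests exceed QPS, which is
	// what produces the "Waited ... due to client-side throttling" lines above.
	// Raising QPS/Burst (illustrative values) relaxes that limit.
	cfg.QPS = 50
	cfg.Burst = 100
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}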
	I0520 10:41:15.393649   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:15.588658   22116 request.go:629] Waited for 194.942384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.588741   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.588767   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.588778   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.588787   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.592186   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.788347   22116 request.go:629] Waited for 195.27123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.788423   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:15.788430   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.788441   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.788448   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.791760   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:15.988640   22116 request.go:629] Waited for 94.336306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.988715   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:15.988726   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:15.988746   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:15.988755   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:15.991688   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:16.188778   22116 request.go:629] Waited for 196.359348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.188840   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.188851   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.188862   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.188872   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.191815   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:16.394498   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:16.394540   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.394549   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.394553   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.398027   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:16.588304   22116 request.go:629] Waited for 189.366125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.588373   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.588381   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.588388   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.588396   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.590853   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:16.894322   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:16.894342   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.894349   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.894353   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.897963   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:16.988073   22116 request.go:629] Waited for 89.246261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.988133   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:16.988139   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:16.988146   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:16.988149   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:16.991229   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:17.394611   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:17.394632   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.394639   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.394641   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.397561   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:17.398153   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:17.398167   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.398174   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.398182   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.400905   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:17.401501   22116 pod_ready.go:102] pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace has status "Ready":"False"
	I0520 10:41:17.893960   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:41:17.893987   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.893998   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.894003   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.898131   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:17.899794   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:17.899810   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.899818   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.899823   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.905616   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:41:17.906773   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:17.906800   22116 pod_ready.go:81] duration metric: took 2.51314231s for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:17.906813   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:17.988132   22116 request.go:629] Waited for 81.255115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:41:17.988191   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:41:17.988200   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:17.988211   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:17.988220   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:17.991659   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.188696   22116 request.go:629] Waited for 196.387999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:18.188759   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:18.188766   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.188779   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.188787   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.193609   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:18.194439   22116 pod_ready.go:92] pod "kube-proxy-6s5x7" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:18.194461   22116 pod_ready.go:81] duration metric: took 287.637901ms for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.194472   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.388407   22116 request.go:629] Waited for 193.871401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:41:18.388480   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:41:18.388487   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.388497   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.388502   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.391681   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.588632   22116 request.go:629] Waited for 196.074265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.588698   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.588703   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.588710   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.588714   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.591868   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.592826   22116 pod_ready.go:92] pod "kube-proxy-dnhm6" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:18.592844   22116 pod_ready.go:81] duration metric: took 398.362862ms for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.592855   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.788777   22116 request.go:629] Waited for 195.860567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:41:18.788834   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:41:18.788839   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.788846   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.788850   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.791832   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:18.988873   22116 request.go:629] Waited for 196.332538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.988921   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:41:18.988933   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:18.988940   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:18.988944   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:18.992467   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:18.993478   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:18.993499   22116 pod_ready.go:81] duration metric: took 400.633573ms for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:18.993512   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:19.188534   22116 request.go:629] Waited for 194.962605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:41:19.188612   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:41:19.188623   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.188632   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.188643   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.192357   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:19.388374   22116 request.go:629] Waited for 195.367276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:19.388425   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:41:19.388429   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.388436   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.388440   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.391758   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:19.392455   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:41:19.392473   22116 pod_ready.go:81] duration metric: took 398.952385ms for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:41:19.392486   22116 pod_ready.go:38] duration metric: took 8.401004615s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
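Note: the pod_ready waits above repeatedly GET each pod (and its node) until the pod reports a Ready condition of "True". A minimal client-go sketch of the same check, polling roughly every 500ms as the timestamps suggest (illustrative only, not minikube's pod_ready.go; the kubeconfig path is hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is the status the log lines above are waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-570850-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}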
	I0520 10:41:19.392505   22116 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:41:19.392562   22116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:41:19.415039   22116 api_server.go:72] duration metric: took 16.785398633s to wait for apiserver process to appear ...
	I0520 10:41:19.415062   22116 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:41:19.415077   22116 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0520 10:41:19.420981   22116 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0520 10:41:19.421055   22116 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I0520 10:41:19.421068   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.421077   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.421081   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.421914   22116 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 10:41:19.422069   22116 api_server.go:141] control plane version: v1.30.1
	I0520 10:41:19.422091   22116 api_server.go:131] duration metric: took 7.02303ms to wait for apiserver health ...
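Note: the healthz probe and the /version request above can be reproduced with the clientset's discovery and REST clients. A minimal sketch against the same endpoints (hypothetical kubeconfig path):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz; a healthy apiserver answers 200 with the body "ok", as logged above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// GET /version, the same call the log issues right after the healthz check.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}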
	I0520 10:41:19.422100   22116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:41:19.588512   22116 request.go:629] Waited for 166.343985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.588594   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.588602   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.588612   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.588623   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.593489   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:41:19.598445   22116 system_pods.go:59] 17 kube-system pods found
	I0520 10:41:19.598470   22116 system_pods.go:61] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:41:19.598476   22116 system_pods.go:61] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:41:19.598481   22116 system_pods.go:61] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:41:19.598486   22116 system_pods.go:61] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:41:19.598490   22116 system_pods.go:61] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:41:19.598501   22116 system_pods.go:61] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:41:19.598506   22116 system_pods.go:61] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:41:19.598513   22116 system_pods.go:61] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:41:19.598516   22116 system_pods.go:61] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:41:19.598521   22116 system_pods.go:61] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:41:19.598524   22116 system_pods.go:61] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:41:19.598527   22116 system_pods.go:61] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:41:19.598531   22116 system_pods.go:61] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:41:19.598535   22116 system_pods.go:61] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:41:19.598538   22116 system_pods.go:61] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:41:19.598542   22116 system_pods.go:61] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:41:19.598544   22116 system_pods.go:61] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:41:19.598549   22116 system_pods.go:74] duration metric: took 176.444033ms to wait for pod list to return data ...
	I0520 10:41:19.598557   22116 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:41:19.787920   22116 request.go:629] Waited for 189.296478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:41:19.787998   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:41:19.788003   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.788010   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.788015   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.791012   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:41:19.791239   22116 default_sa.go:45] found service account: "default"
	I0520 10:41:19.791258   22116 default_sa.go:55] duration metric: took 192.694371ms for default service account to be created ...
	I0520 10:41:19.791267   22116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:41:19.988481   22116 request.go:629] Waited for 197.157713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.988532   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:41:19.988537   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:19.988544   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:19.988552   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:19.994353   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:41:19.999194   22116 system_pods.go:86] 17 kube-system pods found
	I0520 10:41:19.999216   22116 system_pods.go:89] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:41:19.999223   22116 system_pods.go:89] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:41:19.999230   22116 system_pods.go:89] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:41:19.999241   22116 system_pods.go:89] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:41:19.999247   22116 system_pods.go:89] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:41:19.999253   22116 system_pods.go:89] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:41:19.999262   22116 system_pods.go:89] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:41:19.999268   22116 system_pods.go:89] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:41:19.999273   22116 system_pods.go:89] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:41:19.999278   22116 system_pods.go:89] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:41:19.999282   22116 system_pods.go:89] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:41:19.999286   22116 system_pods.go:89] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:41:19.999290   22116 system_pods.go:89] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:41:19.999294   22116 system_pods.go:89] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:41:19.999298   22116 system_pods.go:89] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:41:19.999302   22116 system_pods.go:89] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:41:19.999305   22116 system_pods.go:89] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:41:19.999311   22116 system_pods.go:126] duration metric: took 208.03832ms to wait for k8s-apps to be running ...
	I0520 10:41:19.999319   22116 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:41:19.999370   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:41:20.019342   22116 system_svc.go:56] duration metric: took 20.01456ms WaitForService to wait for kubelet
	I0520 10:41:20.019366   22116 kubeadm.go:576] duration metric: took 17.389728118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:41:20.019383   22116 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:41:20.188818   22116 request.go:629] Waited for 169.356099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I0520 10:41:20.188884   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I0520 10:41:20.188890   22116 round_trippers.go:469] Request Headers:
	I0520 10:41:20.188896   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:41:20.188900   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:41:20.192704   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:41:20.193489   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:41:20.193510   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:41:20.193521   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:41:20.193525   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:41:20.193529   22116 node_conditions.go:105] duration metric: took 174.142433ms to run NodePressure ...
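Note: the NodePressure step lists the nodes and reads the same capacity fields printed above (ephemeral storage and CPU). A minimal sketch of pulling those values from the Node objects (illustrative; hypothetical kubeconfig path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The same figures the log prints: ephemeral storage capacity and CPU count.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}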
	I0520 10:41:20.193538   22116 start.go:240] waiting for startup goroutines ...
	I0520 10:41:20.193560   22116 start.go:254] writing updated cluster config ...
	I0520 10:41:20.195606   22116 out.go:177] 
	I0520 10:41:20.196922   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:20.197059   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:41:20.198510   22116 out.go:177] * Starting "ha-570850-m03" control-plane node in "ha-570850" cluster
	I0520 10:41:20.199598   22116 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:41:20.199617   22116 cache.go:56] Caching tarball of preloaded images
	I0520 10:41:20.199712   22116 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:41:20.199724   22116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:41:20.199822   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:41:20.199985   22116 start.go:360] acquireMachinesLock for ha-570850-m03: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:41:20.200036   22116 start.go:364] duration metric: took 30.181µs to acquireMachinesLock for "ha-570850-m03"
	I0520 10:41:20.200060   22116 start.go:93] Provisioning new machine with config: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:41:20.200161   22116 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0520 10:41:20.201518   22116 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 10:41:20.201601   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:20.201651   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:20.216212   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0520 10:41:20.216563   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:20.217029   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:20.217052   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:20.217397   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:20.217563   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:20.217695   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:20.217863   22116 start.go:159] libmachine.API.Create for "ha-570850" (driver="kvm2")
	I0520 10:41:20.217888   22116 client.go:168] LocalClient.Create starting
	I0520 10:41:20.217916   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 10:41:20.217946   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:41:20.217959   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:41:20.218011   22116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 10:41:20.218029   22116 main.go:141] libmachine: Decoding PEM data...
	I0520 10:41:20.218046   22116 main.go:141] libmachine: Parsing certificate...
	I0520 10:41:20.218063   22116 main.go:141] libmachine: Running pre-create checks...
	I0520 10:41:20.218071   22116 main.go:141] libmachine: (ha-570850-m03) Calling .PreCreateCheck
	I0520 10:41:20.218242   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetConfigRaw
	I0520 10:41:20.218600   22116 main.go:141] libmachine: Creating machine...
	I0520 10:41:20.218614   22116 main.go:141] libmachine: (ha-570850-m03) Calling .Create
	I0520 10:41:20.218735   22116 main.go:141] libmachine: (ha-570850-m03) Creating KVM machine...
	I0520 10:41:20.219883   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found existing default KVM network
	I0520 10:41:20.219964   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found existing private KVM network mk-ha-570850
	I0520 10:41:20.220062   22116 main.go:141] libmachine: (ha-570850-m03) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03 ...
	I0520 10:41:20.220085   22116 main.go:141] libmachine: (ha-570850-m03) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:41:20.220143   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.220042   23111 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:41:20.220223   22116 main.go:141] libmachine: (ha-570850-m03) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 10:41:20.430926   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.430801   23111 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa...
	I0520 10:41:20.731910   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.731810   23111 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/ha-570850-m03.rawdisk...
	I0520 10:41:20.731937   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Writing magic tar header
	I0520 10:41:20.731950   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Writing SSH key tar header
	I0520 10:41:20.731962   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:20.731914   23111 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03 ...
	I0520 10:41:20.732013   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03
	I0520 10:41:20.732048   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03 (perms=drwx------)
	I0520 10:41:20.732079   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 10:41:20.732092   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 10:41:20.732099   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:41:20.732111   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 10:41:20.732118   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 10:41:20.732129   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home/jenkins
	I0520 10:41:20.732137   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Checking permissions on dir: /home
	I0520 10:41:20.732153   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Skipping /home - not owner
	I0520 10:41:20.732167   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 10:41:20.732186   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 10:41:20.732196   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 10:41:20.732201   22116 main.go:141] libmachine: (ha-570850-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 10:41:20.732209   22116 main.go:141] libmachine: (ha-570850-m03) Creating domain...
	I0520 10:41:20.733237   22116 main.go:141] libmachine: (ha-570850-m03) define libvirt domain using xml: 
	I0520 10:41:20.733252   22116 main.go:141] libmachine: (ha-570850-m03) <domain type='kvm'>
	I0520 10:41:20.733260   22116 main.go:141] libmachine: (ha-570850-m03)   <name>ha-570850-m03</name>
	I0520 10:41:20.733268   22116 main.go:141] libmachine: (ha-570850-m03)   <memory unit='MiB'>2200</memory>
	I0520 10:41:20.733295   22116 main.go:141] libmachine: (ha-570850-m03)   <vcpu>2</vcpu>
	I0520 10:41:20.733318   22116 main.go:141] libmachine: (ha-570850-m03)   <features>
	I0520 10:41:20.733330   22116 main.go:141] libmachine: (ha-570850-m03)     <acpi/>
	I0520 10:41:20.733340   22116 main.go:141] libmachine: (ha-570850-m03)     <apic/>
	I0520 10:41:20.733349   22116 main.go:141] libmachine: (ha-570850-m03)     <pae/>
	I0520 10:41:20.733361   22116 main.go:141] libmachine: (ha-570850-m03)     
	I0520 10:41:20.733372   22116 main.go:141] libmachine: (ha-570850-m03)   </features>
	I0520 10:41:20.733378   22116 main.go:141] libmachine: (ha-570850-m03)   <cpu mode='host-passthrough'>
	I0520 10:41:20.733391   22116 main.go:141] libmachine: (ha-570850-m03)   
	I0520 10:41:20.733404   22116 main.go:141] libmachine: (ha-570850-m03)   </cpu>
	I0520 10:41:20.733416   22116 main.go:141] libmachine: (ha-570850-m03)   <os>
	I0520 10:41:20.733427   22116 main.go:141] libmachine: (ha-570850-m03)     <type>hvm</type>
	I0520 10:41:20.733439   22116 main.go:141] libmachine: (ha-570850-m03)     <boot dev='cdrom'/>
	I0520 10:41:20.733449   22116 main.go:141] libmachine: (ha-570850-m03)     <boot dev='hd'/>
	I0520 10:41:20.733460   22116 main.go:141] libmachine: (ha-570850-m03)     <bootmenu enable='no'/>
	I0520 10:41:20.733465   22116 main.go:141] libmachine: (ha-570850-m03)   </os>
	I0520 10:41:20.733475   22116 main.go:141] libmachine: (ha-570850-m03)   <devices>
	I0520 10:41:20.733492   22116 main.go:141] libmachine: (ha-570850-m03)     <disk type='file' device='cdrom'>
	I0520 10:41:20.733508   22116 main.go:141] libmachine: (ha-570850-m03)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/boot2docker.iso'/>
	I0520 10:41:20.733519   22116 main.go:141] libmachine: (ha-570850-m03)       <target dev='hdc' bus='scsi'/>
	I0520 10:41:20.733528   22116 main.go:141] libmachine: (ha-570850-m03)       <readonly/>
	I0520 10:41:20.733538   22116 main.go:141] libmachine: (ha-570850-m03)     </disk>
	I0520 10:41:20.733548   22116 main.go:141] libmachine: (ha-570850-m03)     <disk type='file' device='disk'>
	I0520 10:41:20.733556   22116 main.go:141] libmachine: (ha-570850-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 10:41:20.733570   22116 main.go:141] libmachine: (ha-570850-m03)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/ha-570850-m03.rawdisk'/>
	I0520 10:41:20.733586   22116 main.go:141] libmachine: (ha-570850-m03)       <target dev='hda' bus='virtio'/>
	I0520 10:41:20.733597   22116 main.go:141] libmachine: (ha-570850-m03)     </disk>
	I0520 10:41:20.733605   22116 main.go:141] libmachine: (ha-570850-m03)     <interface type='network'>
	I0520 10:41:20.733620   22116 main.go:141] libmachine: (ha-570850-m03)       <source network='mk-ha-570850'/>
	I0520 10:41:20.733630   22116 main.go:141] libmachine: (ha-570850-m03)       <model type='virtio'/>
	I0520 10:41:20.733639   22116 main.go:141] libmachine: (ha-570850-m03)     </interface>
	I0520 10:41:20.733646   22116 main.go:141] libmachine: (ha-570850-m03)     <interface type='network'>
	I0520 10:41:20.733674   22116 main.go:141] libmachine: (ha-570850-m03)       <source network='default'/>
	I0520 10:41:20.733696   22116 main.go:141] libmachine: (ha-570850-m03)       <model type='virtio'/>
	I0520 10:41:20.733710   22116 main.go:141] libmachine: (ha-570850-m03)     </interface>
	I0520 10:41:20.733721   22116 main.go:141] libmachine: (ha-570850-m03)     <serial type='pty'>
	I0520 10:41:20.733734   22116 main.go:141] libmachine: (ha-570850-m03)       <target port='0'/>
	I0520 10:41:20.733745   22116 main.go:141] libmachine: (ha-570850-m03)     </serial>
	I0520 10:41:20.733755   22116 main.go:141] libmachine: (ha-570850-m03)     <console type='pty'>
	I0520 10:41:20.733764   22116 main.go:141] libmachine: (ha-570850-m03)       <target type='serial' port='0'/>
	I0520 10:41:20.733785   22116 main.go:141] libmachine: (ha-570850-m03)     </console>
	I0520 10:41:20.733804   22116 main.go:141] libmachine: (ha-570850-m03)     <rng model='virtio'>
	I0520 10:41:20.733819   22116 main.go:141] libmachine: (ha-570850-m03)       <backend model='random'>/dev/random</backend>
	I0520 10:41:20.733826   22116 main.go:141] libmachine: (ha-570850-m03)     </rng>
	I0520 10:41:20.733833   22116 main.go:141] libmachine: (ha-570850-m03)     
	I0520 10:41:20.733842   22116 main.go:141] libmachine: (ha-570850-m03)     
	I0520 10:41:20.733854   22116 main.go:141] libmachine: (ha-570850-m03)   </devices>
	I0520 10:41:20.733865   22116 main.go:141] libmachine: (ha-570850-m03) </domain>
	I0520 10:41:20.733883   22116 main.go:141] libmachine: (ha-570850-m03) 
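The block above is the libvirt domain definition the KVM driver logs before creating the node VM: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk image, two virtio NICs (the cluster network mk-ha-570850 plus libvirt's default network), a serial console, and a virtio RNG. As a hedged illustration only, not minikube's actual code, a definition like this can be rendered from a struct with Go's text/template; the DomainSpec type, field names, and file paths below are hypothetical placeholders.

    package main

    import (
    	"os"
    	"text/template"
    )

    // DomainSpec is a hypothetical container for the values seen in the logged XML.
    type DomainSpec struct {
    	Name       string
    	MemoryMiB  int
    	VCPUs      int
    	ISOPath    string
    	DiskPath   string
    	PrivateNet string // e.g. "mk-ha-570850"
    }

    // domainTmpl mirrors the shape of the logged <domain> definition (abridged).
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.PrivateNet}}'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
    	t := template.Must(template.New("domain").Parse(domainTmpl))
    	spec := DomainSpec{
    		Name:       "ha-570850-m03",
    		MemoryMiB:  2200,
    		VCPUs:      2,
    		ISOPath:    "/path/to/boot2docker.iso",
    		DiskPath:   "/path/to/ha-570850-m03.rawdisk",
    		PrivateNet: "mk-ha-570850",
    	}
    	// Render the domain XML to stdout; a real driver would hand it to libvirt's define-domain API.
    	if err := t.Execute(os.Stdout, spec); err != nil {
    		panic(err)
    	}
    }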
	I0520 10:41:20.740500   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:ed:f4:db in network default
	I0520 10:41:20.741087   22116 main.go:141] libmachine: (ha-570850-m03) Ensuring networks are active...
	I0520 10:41:20.741104   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:20.741829   22116 main.go:141] libmachine: (ha-570850-m03) Ensuring network default is active
	I0520 10:41:20.742208   22116 main.go:141] libmachine: (ha-570850-m03) Ensuring network mk-ha-570850 is active
	I0520 10:41:20.742551   22116 main.go:141] libmachine: (ha-570850-m03) Getting domain xml...
	I0520 10:41:20.743245   22116 main.go:141] libmachine: (ha-570850-m03) Creating domain...
	I0520 10:41:21.960333   22116 main.go:141] libmachine: (ha-570850-m03) Waiting to get IP...
	I0520 10:41:21.961050   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:21.961460   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:21.961514   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:21.961439   23111 retry.go:31] will retry after 268.018752ms: waiting for machine to come up
	I0520 10:41:22.230979   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:22.231446   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:22.231470   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:22.231400   23111 retry.go:31] will retry after 287.26263ms: waiting for machine to come up
	I0520 10:41:22.519701   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:22.520118   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:22.520142   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:22.520074   23111 retry.go:31] will retry after 324.707914ms: waiting for machine to come up
	I0520 10:41:22.846403   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:22.846978   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:22.847005   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:22.846933   23111 retry.go:31] will retry after 379.38423ms: waiting for machine to come up
	I0520 10:41:23.227427   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:23.227894   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:23.227922   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:23.227853   23111 retry.go:31] will retry after 502.147306ms: waiting for machine to come up
	I0520 10:41:23.731472   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:23.731950   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:23.731980   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:23.731893   23111 retry.go:31] will retry after 867.897028ms: waiting for machine to come up
	I0520 10:41:24.601641   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:24.602172   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:24.602200   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:24.602129   23111 retry.go:31] will retry after 758.230217ms: waiting for machine to come up
	I0520 10:41:25.361833   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:25.362238   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:25.362264   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:25.362202   23111 retry.go:31] will retry after 1.041261872s: waiting for machine to come up
	I0520 10:41:26.405315   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:26.405801   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:26.405831   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:26.405749   23111 retry.go:31] will retry after 1.248321269s: waiting for machine to come up
	I0520 10:41:27.655273   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:27.655670   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:27.655692   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:27.655654   23111 retry.go:31] will retry after 1.816923512s: waiting for machine to come up
	I0520 10:41:29.474155   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:29.474630   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:29.474652   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:29.474580   23111 retry.go:31] will retry after 1.809298154s: waiting for machine to come up
	I0520 10:41:31.285822   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:31.286229   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:31.286253   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:31.286179   23111 retry.go:31] will retry after 2.419090022s: waiting for machine to come up
	I0520 10:41:33.707009   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:33.707672   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:33.707698   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:33.707629   23111 retry.go:31] will retry after 4.274360948s: waiting for machine to come up
	I0520 10:41:37.983834   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:37.984218   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find current IP address of domain ha-570850-m03 in network mk-ha-570850
	I0520 10:41:37.984240   22116 main.go:141] libmachine: (ha-570850-m03) DBG | I0520 10:41:37.984182   23111 retry.go:31] will retry after 3.561976174s: waiting for machine to come up
	I0520 10:41:41.547233   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.547761   22116 main.go:141] libmachine: (ha-570850-m03) Found IP for machine: 192.168.39.240
	I0520 10:41:41.547791   22116 main.go:141] libmachine: (ha-570850-m03) Reserving static IP address...
	I0520 10:41:41.547806   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has current primary IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.548185   22116 main.go:141] libmachine: (ha-570850-m03) DBG | unable to find host DHCP lease matching {name: "ha-570850-m03", mac: "52:54:00:7b:76:2b", ip: "192.168.39.240"} in network mk-ha-570850
	I0520 10:41:41.621513   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Getting to WaitForSSH function...
	I0520 10:41:41.621537   22116 main.go:141] libmachine: (ha-570850-m03) Reserved static IP address: 192.168.39.240
	I0520 10:41:41.621548   22116 main.go:141] libmachine: (ha-570850-m03) Waiting for SSH to be available...
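The repeated "will retry after …: waiting for machine to come up" lines above come from a polling loop that re-queries the network's DHCP leases, with increasing delays, until the new domain shows an IP address (found here after about 20 seconds, at 192.168.39.240). The following is a minimal sketch of that retry-with-growing-backoff pattern, not minikube's retry.go; lookupIP is a hypothetical stand-in for the lease lookup, and the cap and growth factor are assumptions.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP is a hypothetical stand-in for reading the libvirt network's DHCP leases.
    func lookupIP(mac string) (string, error) {
    	return "", errNoIP // pretend the lease has not appeared yet
    }

    // waitForIP polls lookupIP with a jittered, growing delay until it succeeds or the deadline passes.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Jitter plus growth, mirroring the increasing intervals seen in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 5*time.Second {
    			delay += delay / 2
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
    	// With the stub lookupIP this always times out; it only demonstrates the loop shape.
    	if ip, err := waitForIP("52:54:00:7b:76:2b", 3*time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }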
	I0520 10:41:41.624355   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.624794   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.624816   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.624978   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Using SSH client type: external
	I0520 10:41:41.625019   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa (-rw-------)
	I0520 10:41:41.625049   22116 main.go:141] libmachine: (ha-570850-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 10:41:41.625059   22116 main.go:141] libmachine: (ha-570850-m03) DBG | About to run SSH command:
	I0520 10:41:41.625070   22116 main.go:141] libmachine: (ha-570850-m03) DBG | exit 0
	I0520 10:41:41.752972   22116 main.go:141] libmachine: (ha-570850-m03) DBG | SSH cmd err, output: <nil>: 
	I0520 10:41:41.753241   22116 main.go:141] libmachine: (ha-570850-m03) KVM machine creation complete!
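The "Using SSH client type: external" probe logged just above runs the system ssh binary with the listed options and a trivial `exit 0` command; a zero exit status is the signal that the guest's sshd is reachable, at which point machine creation is declared complete. A rough stdlib sketch of that readiness check follows (a subset of the logged ssh options; the key path and address are placeholders, and this is not the actual minikube implementation).

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReady runs `ssh ... exit 0` against the guest and reports whether it succeeded.
    func sshReady(user, addr, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		fmt.Sprintf("%s@%s", user, addr),
    		"exit", "0",
    	}
    	// A non-nil error (connection failure or non-zero exit) means SSH is not ready yet.
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
    	if err := sshReady("docker", "192.168.39.240", "/path/to/id_rsa"); err != nil {
    		fmt.Println("ssh not ready:", err)
    		return
    	}
    	fmt.Println("ssh is available")
    }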
	I0520 10:41:41.753527   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetConfigRaw
	I0520 10:41:41.754036   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:41.754199   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:41.754368   22116 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 10:41:41.754382   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:41:41.755637   22116 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 10:41:41.755651   22116 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 10:41:41.755656   22116 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 10:41:41.755661   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:41.758103   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.758459   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.758486   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.758625   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:41.758789   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.758970   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.759110   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:41.759279   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:41.759468   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:41.759478   22116 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 10:41:41.868268   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:41:41.868294   22116 main.go:141] libmachine: Detecting the provisioner...
	I0520 10:41:41.868304   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:41.871139   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.871529   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.871575   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.871670   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:41.871879   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.872058   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.872232   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:41.872412   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:41.872619   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:41.872632   22116 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 10:41:41.985643   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 10:41:41.985714   22116 main.go:141] libmachine: found compatible host: buildroot
	I0520 10:41:41.985728   22116 main.go:141] libmachine: Provisioning with buildroot...
	I0520 10:41:41.985740   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:41.985999   22116 buildroot.go:166] provisioning hostname "ha-570850-m03"
	I0520 10:41:41.986029   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:41.986232   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:41.988748   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.989221   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:41.989258   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:41.989431   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:41.989622   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.989776   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:41.989952   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:41.990095   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:41.990291   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:41.990306   22116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850-m03 && echo "ha-570850-m03" | sudo tee /etc/hostname
	I0520 10:41:42.116021   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850-m03
	
	I0520 10:41:42.116051   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.118763   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.119103   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.119133   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.119313   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.119513   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.119666   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.119801   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.119958   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:42.120137   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:42.120158   22116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:41:42.238040   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:41:42.238073   22116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:41:42.238089   22116 buildroot.go:174] setting up certificates
	I0520 10:41:42.238119   22116 provision.go:84] configureAuth start
	I0520 10:41:42.238130   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetMachineName
	I0520 10:41:42.238390   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:42.241156   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.241500   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.241522   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.241693   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.243860   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.244276   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.244305   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.244419   22116 provision.go:143] copyHostCerts
	I0520 10:41:42.244453   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:41:42.244494   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:41:42.244505   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:41:42.244596   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:41:42.244679   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:41:42.244704   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:41:42.244714   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:41:42.244750   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:41:42.244804   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:41:42.244829   22116 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:41:42.244838   22116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:41:42.244873   22116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:41:42.244954   22116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850-m03 san=[127.0.0.1 192.168.39.240 ha-570850-m03 localhost minikube]
	I0520 10:41:42.416951   22116 provision.go:177] copyRemoteCerts
	I0520 10:41:42.417012   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:41:42.417040   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.419714   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.420020   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.420048   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.420233   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.420401   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.420542   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.420646   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:42.508180   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:41:42.508245   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:41:42.538008   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:41:42.538081   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:41:42.562031   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:41:42.562115   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:41:42.588098   22116 provision.go:87] duration metric: took 349.962755ms to configureAuth
	I0520 10:41:42.588126   22116 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:41:42.588323   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:42.588402   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.590889   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.591250   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.591304   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.591423   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.591608   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.591765   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.591914   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.592034   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:42.592200   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:42.592219   22116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:41:42.865113   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:41:42.865155   22116 main.go:141] libmachine: Checking connection to Docker...
	I0520 10:41:42.865166   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetURL
	I0520 10:41:42.866490   22116 main.go:141] libmachine: (ha-570850-m03) DBG | Using libvirt version 6000000
	I0520 10:41:42.869020   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.869337   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.869376   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.869565   22116 main.go:141] libmachine: Docker is up and running!
	I0520 10:41:42.869583   22116 main.go:141] libmachine: Reticulating splines...
	I0520 10:41:42.869592   22116 client.go:171] duration metric: took 22.651695288s to LocalClient.Create
	I0520 10:41:42.869618   22116 start.go:167] duration metric: took 22.651753943s to libmachine.API.Create "ha-570850"
	I0520 10:41:42.869628   22116 start.go:293] postStartSetup for "ha-570850-m03" (driver="kvm2")
	I0520 10:41:42.869641   22116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:41:42.869678   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:42.869908   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:41:42.869949   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:42.872077   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.872476   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:42.872502   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:42.872654   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:42.872816   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:42.872998   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:42.873145   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:42.959873   22116 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:41:42.964663   22116 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:41:42.964686   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:41:42.964773   22116 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:41:42.964872   22116 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:41:42.964883   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:41:42.965018   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:41:42.975890   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:41:43.001297   22116 start.go:296] duration metric: took 131.655107ms for postStartSetup
	I0520 10:41:43.001352   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetConfigRaw
	I0520 10:41:43.001902   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:43.004488   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.004878   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.004909   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.005294   22116 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:41:43.005502   22116 start.go:128] duration metric: took 22.805329464s to createHost
	I0520 10:41:43.005551   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:43.007733   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.008132   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.008164   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.008297   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:43.008470   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.008628   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.008817   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:43.008989   22116 main.go:141] libmachine: Using SSH client type: native
	I0520 10:41:43.009162   22116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0520 10:41:43.009175   22116 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:41:43.121728   22116 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716201703.095618454
	
	I0520 10:41:43.121746   22116 fix.go:216] guest clock: 1716201703.095618454
	I0520 10:41:43.121753   22116 fix.go:229] Guest: 2024-05-20 10:41:43.095618454 +0000 UTC Remote: 2024-05-20 10:41:43.005516012 +0000 UTC m=+211.310919977 (delta=90.102442ms)
	I0520 10:41:43.121766   22116 fix.go:200] guest clock delta is within tolerance: 90.102442ms
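The guest-clock check above runs `date +%s.%N` inside the VM, parses the epoch-seconds output, and compares it against the host's idea of the time; here the delta is about 90 ms, well inside tolerance, so nothing is adjusted. A hedged sketch of that parsing and comparison (the 2-second threshold below is an assumption for illustration, not minikube's configured tolerance):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output (e.g. "1716201703.095618454") into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad or truncate the fractional part to exactly nanoseconds.
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1716201703.095618454")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	// Assumed tolerance for illustration; only resync when the skew is too large.
    	const tolerance = 2 * time.Second
    	if math.Abs(float64(delta)) > float64(tolerance) {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	} else {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	}
    }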
	I0520 10:41:43.121771   22116 start.go:83] releasing machines lock for "ha-570850-m03", held for 22.921724286s
	I0520 10:41:43.121806   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.122104   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:43.124936   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.125272   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.125303   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.127628   22116 out.go:177] * Found network options:
	I0520 10:41:43.128949   22116 out.go:177]   - NO_PROXY=192.168.39.152,192.168.39.111
	W0520 10:41:43.130231   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 10:41:43.130253   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:41:43.130267   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.130888   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.131100   22116 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:41:43.131199   22116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:41:43.131241   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	W0520 10:41:43.131287   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	W0520 10:41:43.131308   22116 proxy.go:119] fail to check proxy env: Error ip not in block
	I0520 10:41:43.131366   22116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:41:43.131389   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:41:43.133884   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134148   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134379   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.134405   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134436   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:43.134613   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.134618   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:43.134651   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:43.134786   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:43.134819   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:41:43.134998   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:41:43.134989   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:43.135180   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:41:43.135326   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:41:43.376620   22116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:41:43.382897   22116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:41:43.382956   22116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:41:43.399669   22116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 10:41:43.399698   22116 start.go:494] detecting cgroup driver to use...
	I0520 10:41:43.399747   22116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:41:43.419089   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:41:43.433639   22116 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:41:43.433698   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:41:43.446764   22116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:41:43.460806   22116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:41:43.595635   22116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:41:43.758034   22116 docker.go:233] disabling docker service ...
	I0520 10:41:43.758111   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:41:43.771659   22116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:41:43.784975   22116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:41:43.922908   22116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:41:44.047300   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:41:44.061537   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:41:44.080731   22116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:41:44.080783   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.090964   22116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:41:44.091021   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.100922   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.110659   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.120519   22116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:41:44.132082   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.142086   22116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.160922   22116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:41:44.171514   22116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:41:44.181331   22116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 10:41:44.181373   22116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 10:41:44.194162   22116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
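The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the driver falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. The sketch below only illustrates that check-then-fallback ordering; in minikube these commands run inside the guest over SSH (ssh_runner), not locally as shown here.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the order seen in the log: probe the sysctl key,
    // load br_netfilter if it is missing, then turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
    		return nil // knob already present
    	}
    	// The sysctl key does not exist until the module is loaded.
    	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %w", err)
    	}
    	// Kubernetes networking also needs IPv4 forwarding enabled.
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    	}
    }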
	I0520 10:41:44.203326   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:41:44.319558   22116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:41:44.467225   22116 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:41:44.467297   22116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:41:44.473349   22116 start.go:562] Will wait 60s for crictl version
	I0520 10:41:44.473408   22116 ssh_runner.go:195] Run: which crictl
	I0520 10:41:44.477055   22116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:41:44.518961   22116 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:41:44.519038   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:41:44.549199   22116 ssh_runner.go:195] Run: crio --version
	I0520 10:41:44.582808   22116 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:41:44.584239   22116 out.go:177]   - env NO_PROXY=192.168.39.152
	I0520 10:41:44.585651   22116 out.go:177]   - env NO_PROXY=192.168.39.152,192.168.39.111
	I0520 10:41:44.586922   22116 main.go:141] libmachine: (ha-570850-m03) Calling .GetIP
	I0520 10:41:44.589867   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:44.590250   22116 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:41:44.590275   22116 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:41:44.590465   22116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:41:44.594866   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:41:44.607801   22116 mustload.go:65] Loading cluster: ha-570850
	I0520 10:41:44.607995   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:41:44.608230   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:44.608267   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:44.622461   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44717
	I0520 10:41:44.622817   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:44.623230   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:44.623250   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:44.623538   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:44.623725   22116 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:41:44.625308   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:41:44.625651   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:44.625685   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:44.639873   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43981
	I0520 10:41:44.640316   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:44.640787   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:44.640806   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:44.641123   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:44.641320   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:41:44.641461   22116 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.240
	I0520 10:41:44.641472   22116 certs.go:194] generating shared ca certs ...
	I0520 10:41:44.641484   22116 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:41:44.641592   22116 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:41:44.641628   22116 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:41:44.641637   22116 certs.go:256] generating profile certs ...
	I0520 10:41:44.641699   22116 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:41:44.641722   22116 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d
	I0520 10:41:44.641737   22116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.240 192.168.39.254]
	I0520 10:41:44.747823   22116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d ...
	I0520 10:41:44.747850   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d: {Name:mk750edf2e0080dc0e237e62210a814352975f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:41:44.748006   22116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d ...
	I0520 10:41:44.748018   22116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d: {Name:mkdfc7e4adc2c900ccc75ceac6cc66de69332fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:41:44.748089   22116 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.8e7a634d -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:41:44.748215   22116 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.8e7a634d -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:41:44.748339   22116 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
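The apiserver serving certificate generated above is signed with the shared minikube CA key and carries every control-plane address as a SAN: the service VIP 10.96.0.1, localhost, the three node IPs, and the load-balancer address 192.168.39.254, so the same certificate verifies no matter which control-plane node answers. A compressed sketch of issuing such a SAN certificate with Go's crypto/x509 follows; it is self-contained (it creates a throwaway CA instead of loading ca.key from the .minikube directory) and the subject and validity period are assumptions.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA (minikube would load the existing ca.crt/ca.key instead).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SAN list seen in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-570850-m03", "localhost", "minikube"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.152"), net.ParseIP("192.168.39.111"),
    			net.ParseIP("192.168.39.240"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued apiserver cert, %d DER bytes, %d IP SANs\n", len(srvDER), len(srvTmpl.IPAddresses))
    }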
	I0520 10:41:44.748353   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:41:44.748369   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:41:44.748381   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:41:44.748393   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:41:44.748405   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:41:44.748417   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:41:44.748429   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:41:44.748440   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:41:44.748486   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:41:44.748511   22116 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:41:44.748520   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:41:44.748542   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:41:44.748563   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:41:44.748583   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:41:44.748617   22116 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:41:44.748643   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:44.748657   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:41:44.748670   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:41:44.748699   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:41:44.751816   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:44.752187   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:41:44.752215   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:44.752414   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:41:44.752605   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:41:44.752811   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:41:44.752978   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:41:44.829239   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0520 10:41:44.835382   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0520 10:41:44.848371   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0520 10:41:44.853059   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0520 10:41:44.863419   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0520 10:41:44.867919   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0520 10:41:44.879256   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0520 10:41:44.883516   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0520 10:41:44.894057   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0520 10:41:44.898578   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0520 10:41:44.910264   22116 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0520 10:41:44.915872   22116 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0520 10:41:44.926900   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:41:44.952560   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:41:44.980607   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:41:45.005074   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:41:45.033449   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0520 10:41:45.060496   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:41:45.085131   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:41:45.109723   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:41:45.135217   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:41:45.159646   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:41:45.186572   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:41:45.213043   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0520 10:41:45.230302   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0520 10:41:45.246831   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0520 10:41:45.264322   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0520 10:41:45.281986   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0520 10:41:45.298302   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0520 10:41:45.317492   22116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0520 10:41:45.336015   22116 ssh_runner.go:195] Run: openssl version
	I0520 10:41:45.341596   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:41:45.352455   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:45.356910   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:45.356964   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:41:45.362710   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:41:45.373190   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:41:45.383648   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:41:45.388178   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:41:45.388217   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:41:45.394008   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:41:45.404367   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:41:45.414605   22116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:41:45.419035   22116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:41:45.419072   22116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:41:45.425800   22116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:41:45.436078   22116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:41:45.439971   22116 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:41:45.440012   22116 kubeadm.go:928] updating node {m03 192.168.39.240 8443 v1.30.1 crio true true} ...
	I0520 10:41:45.440093   22116 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:41:45.440117   22116 kube-vip.go:115] generating kube-vip config ...
	I0520 10:41:45.440141   22116 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:41:45.455646   22116 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:41:45.455729   22116 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 10:41:45.455784   22116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:41:45.465614   22116 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 10:41:45.465659   22116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 10:41:45.475456   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 10:41:45.475467   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 10:41:45.475484   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:41:45.475500   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:41:45.475457   22116 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 10:41:45.475571   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:41:45.475554   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 10:41:45.475642   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 10:41:45.490052   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 10:41:45.490088   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 10:41:45.490114   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 10:41:45.490121   22116 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:41:45.490090   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 10:41:45.490212   22116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 10:41:45.501428   22116 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 10:41:45.501467   22116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
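
The log above shows minikube fetching the v1.30.1 kubeadm, kubectl and kubelet binaries (using dl.k8s.io URLs with a `?checksum=file:…sha256` suffix) and copying them into /var/lib/minikube/binaries/v1.30.1 on the new node. As a rough illustration only, the following Go sketch downloads one such release binary and checks it against the published .sha256 file; the dl.k8s.io URL matches the one logged above, while the helper itself (fetchAndVerify) and the destination path are hypothetical and not part of minikube.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetchAndVerify (hypothetical helper) downloads url to dest and compares the
	// SHA-256 of the body with the hex digest published at url+".sha256".
	func fetchAndVerify(url, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}

		sumResp, err := http.Get(url + ".sha256")
		if err != nil {
			return err
		}
		defer sumResp.Body.Close()
		want, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return err
		}
		// The release .sha256 files contain only the hex digest (plus a trailing newline).
		if hex.EncodeToString(h.Sum(nil)) != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch for %s", url)
		}
		return nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm"
		if err := fetchAndVerify(url, "/tmp/kubeadm"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
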
	I0520 10:41:46.371500   22116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0520 10:41:46.381788   22116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0520 10:41:46.399004   22116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:41:46.416922   22116 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:41:46.433919   22116 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:41:46.438011   22116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:41:46.450512   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:41:46.595259   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:41:46.613101   22116 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:41:46.613499   22116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:41:46.613552   22116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:41:46.629013   22116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I0520 10:41:46.629389   22116 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:41:46.629867   22116 main.go:141] libmachine: Using API Version  1
	I0520 10:41:46.629891   22116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:41:46.630264   22116 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:41:46.630474   22116 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:41:46.630629   22116 start.go:316] joinCluster: &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:41:46.630750   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0520 10:41:46.630773   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:41:46.633565   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:46.634052   22116 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:41:46.634081   22116 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:41:46.634237   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:41:46.634395   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:41:46.634547   22116 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:41:46.634706   22116 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:41:46.798210   22116 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:41:46.798262   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6795am.2royruoiinmyx7oy --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m03 --control-plane --apiserver-advertise-address=192.168.39.240 --apiserver-bind-port=8443"
	I0520 10:42:09.766831   22116 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6795am.2royruoiinmyx7oy --discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-570850-m03 --control-plane --apiserver-advertise-address=192.168.39.240 --apiserver-bind-port=8443": (22.968489292s)
	I0520 10:42:09.766902   22116 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0520 10:42:10.368895   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-570850-m03 minikube.k8s.io/updated_at=2024_05_20T10_42_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=ha-570850 minikube.k8s.io/primary=false
	I0520 10:42:10.482377   22116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-570850-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0520 10:42:10.583198   22116 start.go:318] duration metric: took 23.952566788s to joinCluster
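
For context on the kubeadm join command logged above: the --discovery-token-ca-cert-hash value is, as far as I understand kubeadm's bootstrap discovery, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of that computation follows; the ca.crt path is taken from the cert copy earlier in this log, but the program itself is illustrative, not minikube's or kubeadm's code.

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// On a minikube node the cluster CA is placed at /var/lib/minikube/certs/ca.crt.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Hash the DER-encoded Subject Public Key Info of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
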
	I0520 10:42:10.583301   22116 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 10:42:10.584552   22116 out.go:177] * Verifying Kubernetes components...
	I0520 10:42:10.583716   22116 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:42:10.585786   22116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:42:10.843675   22116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:42:10.874072   22116 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:42:10.874303   22116 kapi.go:59] client config for ha-570850: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0520 10:42:10.874372   22116 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.152:8443
	I0520 10:42:10.874558   22116 node_ready.go:35] waiting up to 6m0s for node "ha-570850-m03" to be "Ready" ...
	I0520 10:42:10.874670   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:10.874679   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:10.874686   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:10.874689   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:10.878086   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:11.374805   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:11.374828   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:11.374837   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:11.374842   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:11.378395   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:11.875438   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:11.875461   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:11.875471   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:11.875475   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:11.878572   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:12.375000   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:12.375020   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:12.375032   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:12.375037   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:12.378978   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:12.874936   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:12.874956   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:12.874963   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:12.874968   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:12.878153   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:12.878912   22116 node_ready.go:53] node "ha-570850-m03" has status "Ready":"False"
	I0520 10:42:13.374715   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:13.374737   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:13.374745   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:13.374749   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:13.378127   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:13.875403   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:13.875430   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:13.875441   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:13.875446   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:13.879182   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:14.375415   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:14.375437   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:14.375444   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:14.375450   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:14.379355   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:14.875106   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:14.875126   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:14.875134   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:14.875137   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:14.878697   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:14.879422   22116 node_ready.go:53] node "ha-570850-m03" has status "Ready":"False"
	I0520 10:42:15.375080   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:15.375101   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:15.375109   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:15.375113   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:15.378574   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:15.875287   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:15.875311   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:15.875322   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:15.875330   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:15.879125   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:16.375244   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:16.375264   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:16.375272   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:16.375274   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:16.379217   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:16.875344   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:16.875364   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:16.875372   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:16.875376   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:16.879395   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:16.879834   22116 node_ready.go:53] node "ha-570850-m03" has status "Ready":"False"
	I0520 10:42:17.375630   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:17.375658   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.375669   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.375676   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.379557   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.875436   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:17.875462   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.875473   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.875481   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.878707   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.879333   22116 node_ready.go:49] node "ha-570850-m03" has status "Ready":"True"
	I0520 10:42:17.879350   22116 node_ready.go:38] duration metric: took 7.004780472s for node "ha-570850-m03" to be "Ready" ...
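
The node_ready.go wait above repeatedly issues GET /api/v1/nodes/ha-570850-m03 until the node reports Ready. A rough client-go sketch of the same kind of readiness poll is shown below, assuming the kubeconfig path logged earlier; the function name, the 500ms interval, and the error handling are illustrative rather than minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the named node until its Ready condition is True or the timeout expires.
	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18925-3819/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitForNodeReady(context.Background(), cs, "ha-570850-m03", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("node is Ready")
	}
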
	I0520 10:42:17.879359   22116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:42:17.879407   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:17.879416   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.879423   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.879427   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.885402   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:42:17.891971   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.892046   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jw77g
	I0520 10:42:17.892057   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.892067   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.892075   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.894683   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.895475   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:17.895489   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.895496   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.895501   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.898059   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.898666   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.898684   22116 pod_ready.go:81] duration metric: took 6.693315ms for pod "coredns-7db6d8ff4d-jw77g" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.898698   22116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.898744   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xr965
	I0520 10:42:17.898751   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.898758   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.898763   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.900803   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.901502   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:17.901518   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.901526   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.901530   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.904147   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.904592   22116 pod_ready.go:92] pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.904608   22116 pod_ready.go:81] duration metric: took 5.89957ms for pod "coredns-7db6d8ff4d-xr965" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.904616   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.904660   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850
	I0520 10:42:17.904667   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.904674   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.904680   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.907493   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.908403   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:17.908417   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.908427   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.908435   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.910798   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:17.911306   22116 pod_ready.go:92] pod "etcd-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.911318   22116 pod_ready.go:81] duration metric: took 6.695204ms for pod "etcd-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.911328   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.911378   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m02
	I0520 10:42:17.911385   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.911392   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.911398   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.914871   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.915706   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:17.915719   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:17.915725   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:17.915728   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:17.919680   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:17.920275   22116 pod_ready.go:92] pod "etcd-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:17.920292   22116 pod_ready.go:81] duration metric: took 8.952781ms for pod "etcd-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:17.920302   22116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:18.075661   22116 request.go:629] Waited for 155.260393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.075737   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.075744   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.075754   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.075758   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.082837   22116 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0520 10:42:18.275867   22116 request.go:629] Waited for 192.36193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.275942   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.275948   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.275954   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.275959   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.278932   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:18.476476   22116 request.go:629] Waited for 55.217948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.476542   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.476549   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.476560   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.476567   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.480118   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:18.676488   22116 request.go:629] Waited for 195.356961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.676542   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:18.676547   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.676554   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.676559   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.680097   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:18.920954   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:18.920976   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:18.920987   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:18.920994   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:18.925668   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:42:19.076051   22116 request.go:629] Waited for 149.304361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.076134   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.076145   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.076169   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.076180   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.079433   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:19.421354   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:19.421371   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.421379   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.421383   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.424148   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:19.476066   22116 request.go:629] Waited for 51.205993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.476138   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.476143   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.476150   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.476154   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.478974   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:19.920652   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:19.920673   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.920680   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.920684   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.925428   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:42:19.926242   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:19.926254   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:19.926261   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:19.926266   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:19.929319   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:19.929956   22116 pod_ready.go:102] pod "etcd-ha-570850-m03" in "kube-system" namespace has status "Ready":"False"
	I0520 10:42:20.421494   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:20.421515   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.421522   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.421526   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.424810   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:20.425540   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:20.425555   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.425564   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.425571   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.428248   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:20.920565   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:20.920595   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.920606   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.920613   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.924245   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:20.924763   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:20.924776   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:20.924784   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:20.924789   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:20.927558   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:21.420634   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/etcd-ha-570850-m03
	I0520 10:42:21.420653   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.420661   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.420665   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.423882   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.424893   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:21.424906   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.424913   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.424916   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.427316   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:21.427818   22116 pod_ready.go:92] pod "etcd-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:21.427835   22116 pod_ready.go:81] duration metric: took 3.507526021s for pod "etcd-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
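
The repeated "Waited for … due to client-side throttling" messages above come from client-go's client-side rate limiter; the kapi.go client config earlier shows QPS:0, Burst:0, which (if I read client-go's defaults correctly) falls back to roughly 5 requests/s with a burst of 10. A small sketch of raising those limits on a rest.Config follows; the chosen numbers are arbitrary examples, not values minikube uses.

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18925-3819/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		// QPS/Burst of 0 mean "use client-go defaults"; raising them reduces
		// the client-side throttling waits seen in the log above.
		cfg.QPS = 50
		cfg.Burst = 100

		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
	}
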
	I0520 10:42:21.427851   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.427904   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850
	I0520 10:42:21.427913   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.427919   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.427923   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.430361   22116 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0520 10:42:21.476149   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:21.476175   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.476198   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.476204   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.479343   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.479822   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:21.479841   22116 pod_ready.go:81] duration metric: took 51.984111ms for pod "kube-apiserver-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.479851   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.676270   22116 request.go:629] Waited for 196.359456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m02
	I0520 10:42:21.676318   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m02
	I0520 10:42:21.676323   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.676330   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.676336   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.679708   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.876186   22116 request.go:629] Waited for 195.758263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:21.876239   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:21.876246   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:21.876255   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:21.876261   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:21.879504   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:21.880203   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:21.880221   22116 pod_ready.go:81] duration metric: took 400.36329ms for pod "kube-apiserver-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:21.880234   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.075888   22116 request.go:629] Waited for 195.580244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m03
	I0520 10:42:22.075950   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850-m03
	I0520 10:42:22.075955   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.075962   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.075967   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.079392   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.275529   22116 request.go:629] Waited for 195.269451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:22.275580   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:22.275584   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.275591   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.275599   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.279161   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.279938   22116 pod_ready.go:92] pod "kube-apiserver-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:22.279956   22116 pod_ready.go:81] duration metric: took 399.71616ms for pod "kube-apiserver-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.279965   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.476118   22116 request.go:629] Waited for 196.071975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:42:22.476169   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850
	I0520 10:42:22.476174   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.476181   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.476186   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.479789   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.676418   22116 request.go:629] Waited for 195.356393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:22.676485   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:22.676504   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.676511   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.676517   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.679997   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:22.680575   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:22.680591   22116 pod_ready.go:81] duration metric: took 400.62041ms for pod "kube-controller-manager-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.680601   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:22.875715   22116 request.go:629] Waited for 195.028483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:42:22.875767   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m02
	I0520 10:42:22.875772   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:22.875779   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:22.875784   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:22.879029   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.075963   22116 request.go:629] Waited for 196.246261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.076029   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.076035   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.076042   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.076048   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.079726   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.080269   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:23.080287   22116 pod_ready.go:81] duration metric: took 399.678793ms for pod "kube-controller-manager-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.080300   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.276421   22116 request.go:629] Waited for 196.059923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m03
	I0520 10:42:23.276493   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-570850-m03
	I0520 10:42:23.276500   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.276510   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.276514   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.280619   22116 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0520 10:42:23.476322   22116 request.go:629] Waited for 194.727822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:23.476487   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:23.476506   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.476526   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.476541   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.481814   22116 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0520 10:42:23.482517   22116 pod_ready.go:92] pod "kube-controller-manager-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:23.482537   22116 pod_ready.go:81] duration metric: took 402.22974ms for pod "kube-controller-manager-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.482547   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.675503   22116 request.go:629] Waited for 192.874541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:42:23.675578   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6s5x7
	I0520 10:42:23.675590   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.675602   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.675610   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.679052   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.876283   22116 request.go:629] Waited for 196.358646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.876345   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:23.876350   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:23.876357   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:23.876365   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:23.879811   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:23.880538   22116 pod_ready.go:92] pod "kube-proxy-6s5x7" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:23.880554   22116 pod_ready.go:81] duration metric: took 398.001639ms for pod "kube-proxy-6s5x7" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:23.880563   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.075774   22116 request.go:629] Waited for 195.155417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:42:24.075856   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dnhm6
	I0520 10:42:24.075863   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.075877   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.075884   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.079417   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.276401   22116 request.go:629] Waited for 196.280633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:24.276487   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:24.276498   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.276508   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.276516   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.279947   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.280426   22116 pod_ready.go:92] pod "kube-proxy-dnhm6" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:24.280443   22116 pod_ready.go:81] duration metric: took 399.874659ms for pod "kube-proxy-dnhm6" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.280452   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n5std" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.475518   22116 request.go:629] Waited for 194.997317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5std
	I0520 10:42:24.475585   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n5std
	I0520 10:42:24.475593   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.475604   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.475610   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.479214   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.675964   22116 request.go:629] Waited for 195.834355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:24.676046   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:24.676056   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.676068   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.676078   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.679964   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:24.680600   22116 pod_ready.go:92] pod "kube-proxy-n5std" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:24.680622   22116 pod_ready.go:81] duration metric: took 400.162798ms for pod "kube-proxy-n5std" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.680634   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:24.875566   22116 request.go:629] Waited for 194.840434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:42:24.875629   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850
	I0520 10:42:24.875637   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:24.875651   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:24.875660   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:24.879192   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.076297   22116 request.go:629] Waited for 196.375594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:25.076358   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850
	I0520 10:42:25.076363   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.076370   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.076375   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.080140   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.080965   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:25.080999   22116 pod_ready.go:81] duration metric: took 400.357085ms for pod "kube-scheduler-ha-570850" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.081014   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.276429   22116 request.go:629] Waited for 195.341783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:42:25.276508   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m02
	I0520 10:42:25.276519   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.276529   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.276536   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.280412   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.475472   22116 request.go:629] Waited for 194.28044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:25.475547   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02
	I0520 10:42:25.475552   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.475559   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.475563   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.479286   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.480175   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:25.480216   22116 pod_ready.go:81] duration metric: took 399.186583ms for pod "kube-scheduler-ha-570850-m02" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.480252   22116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.675498   22116 request.go:629] Waited for 195.101909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m03
	I0520 10:42:25.675549   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-570850-m03
	I0520 10:42:25.675555   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.675564   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.675573   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.679043   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.876224   22116 request.go:629] Waited for 196.274957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:25.876289   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03
	I0520 10:42:25.876297   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.876305   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.876310   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.879394   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:25.880015   22116 pod_ready.go:92] pod "kube-scheduler-ha-570850-m03" in "kube-system" namespace has status "Ready":"True"
	I0520 10:42:25.880032   22116 pod_ready.go:81] duration metric: took 399.765902ms for pod "kube-scheduler-ha-570850-m03" in "kube-system" namespace to be "Ready" ...
	I0520 10:42:25.880046   22116 pod_ready.go:38] duration metric: took 8.000678783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:42:25.880074   22116 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:42:25.880137   22116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:42:25.897431   22116 api_server.go:72] duration metric: took 15.314090229s to wait for apiserver process to appear ...
	I0520 10:42:25.897451   22116 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:42:25.897480   22116 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0520 10:42:25.903203   22116 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0520 10:42:25.903263   22116 round_trippers.go:463] GET https://192.168.39.152:8443/version
	I0520 10:42:25.903273   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:25.903284   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:25.903292   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:25.904216   22116 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0520 10:42:25.904273   22116 api_server.go:141] control plane version: v1.30.1
	I0520 10:42:25.904293   22116 api_server.go:131] duration metric: took 6.834697ms to wait for apiserver health ...
	I0520 10:42:25.904304   22116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:42:26.075691   22116 request.go:629] Waited for 171.30407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.075784   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.075792   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.075808   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.075818   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.081925   22116 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0520 10:42:26.089411   22116 system_pods.go:59] 24 kube-system pods found
	I0520 10:42:26.089447   22116 system_pods.go:61] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:42:26.089453   22116 system_pods.go:61] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:42:26.089459   22116 system_pods.go:61] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:42:26.089463   22116 system_pods.go:61] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:42:26.089467   22116 system_pods.go:61] "etcd-ha-570850-m03" [76f81192-b8f1-4c54-addc-c7d53b97d786] Running
	I0520 10:42:26.089472   22116 system_pods.go:61] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:42:26.089477   22116 system_pods.go:61] "kindnet-6tkd5" [44572e22-dff4-4d00-b3ed-0e3e4b9a3b6c] Running
	I0520 10:42:26.089483   22116 system_pods.go:61] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:42:26.089489   22116 system_pods.go:61] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:42:26.089496   22116 system_pods.go:61] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:42:26.089502   22116 system_pods.go:61] "kube-apiserver-ha-570850-m03" [114f6328-9773-418b-b09f-f8ac2c250750] Running
	I0520 10:42:26.089511   22116 system_pods.go:61] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:42:26.089516   22116 system_pods.go:61] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:42:26.089522   22116 system_pods.go:61] "kube-controller-manager-ha-570850-m03" [f8a61190-3598-4db2-8c63-726463c84b8e] Running
	I0520 10:42:26.089530   22116 system_pods.go:61] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:42:26.089535   22116 system_pods.go:61] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:42:26.089541   22116 system_pods.go:61] "kube-proxy-n5std" [226b707e-c7ba-45c6-814e-a50a2619ea3b] Running
	I0520 10:42:26.089546   22116 system_pods.go:61] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:42:26.089551   22116 system_pods.go:61] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:42:26.089557   22116 system_pods.go:61] "kube-scheduler-ha-570850-m03" [803f0fcb-7bfb-420e-b033-991d63d08650] Running
	I0520 10:42:26.089562   22116 system_pods.go:61] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:42:26.089567   22116 system_pods.go:61] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:42:26.089572   22116 system_pods.go:61] "kube-vip-ha-570850-m03" [2367fb03-581a-4f3b-b9c1-63ace73a3924] Running
	I0520 10:42:26.089578   22116 system_pods.go:61] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:42:26.089585   22116 system_pods.go:74] duration metric: took 185.274444ms to wait for pod list to return data ...
	I0520 10:42:26.089602   22116 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:42:26.276065   22116 request.go:629] Waited for 186.377447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:42:26.276153   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/default/serviceaccounts
	I0520 10:42:26.276164   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.276174   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.276181   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.279283   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:26.279425   22116 default_sa.go:45] found service account: "default"
	I0520 10:42:26.279446   22116 default_sa.go:55] duration metric: took 189.837111ms for default service account to be created ...
	I0520 10:42:26.279457   22116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:42:26.475661   22116 request.go:629] Waited for 196.140142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.475756   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/namespaces/kube-system/pods
	I0520 10:42:26.475769   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.475781   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.475787   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.485226   22116 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0520 10:42:26.492589   22116 system_pods.go:86] 24 kube-system pods found
	I0520 10:42:26.492625   22116 system_pods.go:89] "coredns-7db6d8ff4d-jw77g" [85653365-561a-4097-bda9-77121fd12b00] Running
	I0520 10:42:26.492640   22116 system_pods.go:89] "coredns-7db6d8ff4d-xr965" [e23a9ea0-3c6a-405a-9e09-d814ca8adeb0] Running
	I0520 10:42:26.492647   22116 system_pods.go:89] "etcd-ha-570850" [552d2142-ba50-4203-a6a0-4825c7cd9a5f] Running
	I0520 10:42:26.492653   22116 system_pods.go:89] "etcd-ha-570850-m02" [67bcc993-83b3-4331-86fe-dcc527947352] Running
	I0520 10:42:26.492678   22116 system_pods.go:89] "etcd-ha-570850-m03" [76f81192-b8f1-4c54-addc-c7d53b97d786] Running
	I0520 10:42:26.492684   22116 system_pods.go:89] "kindnet-2zphj" [2ddea332-9ab2-445d-a26e-148ad8cf9a77] Running
	I0520 10:42:26.492696   22116 system_pods.go:89] "kindnet-6tkd5" [44572e22-dff4-4d00-b3ed-0e3e4b9a3b6c] Running
	I0520 10:42:26.492702   22116 system_pods.go:89] "kindnet-f5x4s" [7e4a78b0-a069-4013-a80e-6b0fee4c4f8e] Running
	I0520 10:42:26.492708   22116 system_pods.go:89] "kube-apiserver-ha-570850" [1ac0940a-3739-40ee-80f0-77b95690c907] Running
	I0520 10:42:26.492714   22116 system_pods.go:89] "kube-apiserver-ha-570850-m02" [8aa06b87-752a-427f-b2b8-504f609e3151] Running
	I0520 10:42:26.492720   22116 system_pods.go:89] "kube-apiserver-ha-570850-m03" [114f6328-9773-418b-b09f-f8ac2c250750] Running
	I0520 10:42:26.492732   22116 system_pods.go:89] "kube-controller-manager-ha-570850" [b5e7e18e-92bf-4566-8cc0-4333ee2d12aa] Running
	I0520 10:42:26.492738   22116 system_pods.go:89] "kube-controller-manager-ha-570850-m02" [796d4ac0-bf9d-431e-8c1d-f31dde806499] Running
	I0520 10:42:26.492745   22116 system_pods.go:89] "kube-controller-manager-ha-570850-m03" [f8a61190-3598-4db2-8c63-726463c84b8e] Running
	I0520 10:42:26.492751   22116 system_pods.go:89] "kube-proxy-6s5x7" [424c7748-985a-407a-b778-d57c97016c68] Running
	I0520 10:42:26.492757   22116 system_pods.go:89] "kube-proxy-dnhm6" [3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1] Running
	I0520 10:42:26.492767   22116 system_pods.go:89] "kube-proxy-n5std" [226b707e-c7ba-45c6-814e-a50a2619ea3b] Running
	I0520 10:42:26.492773   22116 system_pods.go:89] "kube-scheduler-ha-570850" [0049e4d9-37d1-41ad-aad5-dbdc12c1af01] Running
	I0520 10:42:26.492779   22116 system_pods.go:89] "kube-scheduler-ha-570850-m02" [b2302685-93c6-477b-b0c5-d3cdb9c6fbc9] Running
	I0520 10:42:26.492785   22116 system_pods.go:89] "kube-scheduler-ha-570850-m03" [803f0fcb-7bfb-420e-b033-991d63d08650] Running
	I0520 10:42:26.492792   22116 system_pods.go:89] "kube-vip-ha-570850" [f230fe6d-0d6e-4a3b-bbf8-694d256f1fde] Running
	I0520 10:42:26.492801   22116 system_pods.go:89] "kube-vip-ha-570850-m02" [07f5a1d4-f793-4417-bec6-5d9eab498365] Running
	I0520 10:42:26.492808   22116 system_pods.go:89] "kube-vip-ha-570850-m03" [2367fb03-581a-4f3b-b9c1-63ace73a3924] Running
	I0520 10:42:26.492813   22116 system_pods.go:89] "storage-provisioner" [8cbdeaca-631d-449f-bb63-9ee4ec3f7a36] Running
	I0520 10:42:26.492821   22116 system_pods.go:126] duration metric: took 213.358057ms to wait for k8s-apps to be running ...
	I0520 10:42:26.492841   22116 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:42:26.492897   22116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:42:26.509575   22116 system_svc.go:56] duration metric: took 16.732603ms WaitForService to wait for kubelet
	I0520 10:42:26.509612   22116 kubeadm.go:576] duration metric: took 15.926273867s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:42:26.509649   22116 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:42:26.676488   22116 request.go:629] Waited for 166.752601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.152:8443/api/v1/nodes
	I0520 10:42:26.676543   22116 round_trippers.go:463] GET https://192.168.39.152:8443/api/v1/nodes
	I0520 10:42:26.676548   22116 round_trippers.go:469] Request Headers:
	I0520 10:42:26.676555   22116 round_trippers.go:473]     Accept: application/json, */*
	I0520 10:42:26.676561   22116 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0520 10:42:26.680442   22116 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0520 10:42:26.681450   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:42:26.681483   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:42:26.681513   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:42:26.681519   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:42:26.681525   22116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 10:42:26.681534   22116 node_conditions.go:123] node cpu capacity is 2
	I0520 10:42:26.681540   22116 node_conditions.go:105] duration metric: took 171.884501ms to run NodePressure ...
	I0520 10:42:26.681554   22116 start.go:240] waiting for startup goroutines ...
	I0520 10:42:26.681579   22116 start.go:254] writing updated cluster config ...
	I0520 10:42:26.681875   22116 ssh_runner.go:195] Run: rm -f paused
	I0520 10:42:26.733767   22116 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:42:26.735135   22116 out.go:177] * Done! kubectl is now configured to use "ha-570850" cluster and "default" namespace by default
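The trace above records minikube's readiness loop: for each control-plane pod it GETs the pod and its node from https://192.168.39.152:8443 (pausing under client-side throttling), then probes /healthz and lists the kube-system pods. What follows is only a rough client-go sketch of that polling pattern, not minikube's pod_ready.go or api_server.go; the kubeconfig path, the pod name, and the 500ms poll interval are assumptions made for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig pointing at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// The /healthz probe seen in the trace, issued through the same client.
	if body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx); err == nil {
		fmt.Printf("healthz: %s\n", body)
	}

	// Poll one control-plane pod until it reports the Ready condition,
	// mirroring the "waiting up to 6m0s for pod ... to be Ready" entries.
	podName, ns := "kube-apiserver-ha-570850", "kube-system"
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("pod %q is Ready\n", podName)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}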
	
	
	==> CRI-O <==
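The CRI-O excerpt below consists largely of the kubelet's periodic Version, ImageFsInfo, and ListContainers RPCs and CRI-O's debug responses. As a hedged sketch only (not part of the test harness), the same RPCs can be sent straight to the CRI socket with the k8s.io/cri-api client; the unix:///var/run/crio/crio.sock endpoint is CRI-O's default and an assumption here, as is a grpc-go version recent enough to accept unix:// targets.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O's default socket path; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same call as the "/runtime.v1.RuntimeService/Version" entries below.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Same call as "/runtime.v1.RuntimeService/ListContainers" with no filter,
	// which CRI-O logs as "No filters were applied, returning full container list".
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%v\n", c.Id, c.Metadata.Name, c.State)
	}
}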
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.852968958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202016852939719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82247178-56ed-4648-838b-dd4163ff49b9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.853529317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e9114a2-eedd-4b33-82e8-093fe4d942c1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.853606479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e9114a2-eedd-4b33-82e8-093fe4d942c1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.853907558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e9114a2-eedd-4b33-82e8-093fe4d942c1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.895971783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1280ca1-d1c6-468e-ace0-80c44eaed3ad name=/runtime.v1.RuntimeService/Version
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.896080794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1280ca1-d1c6-468e-ace0-80c44eaed3ad name=/runtime.v1.RuntimeService/Version
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.898064640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=156460c1-48b7-4eaf-940e-966f0ecfbf1e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.898460219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202016898437512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=156460c1-48b7-4eaf-940e-966f0ecfbf1e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.899034294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f9dae43-07e5-43b8-b264-234438567380 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.899089255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f9dae43-07e5-43b8-b264-234438567380 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.899320551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f9dae43-07e5-43b8-b264-234438567380 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.948807293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d63d7efd-7622-4c02-adc7-34a28dc17813 name=/runtime.v1.RuntimeService/Version
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.948875672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d63d7efd-7622-4c02-adc7-34a28dc17813 name=/runtime.v1.RuntimeService/Version
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.950382857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f94ea649-a600-4a0b-8821-f6fa3960d0e3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.950922270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202016950898686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f94ea649-a600-4a0b-8821-f6fa3960d0e3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.951705452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b3ca1fe-d9a7-4b11-94ae-51d2417546c5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.951781813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b3ca1fe-d9a7-4b11-94ae-51d2417546c5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.952047108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b3ca1fe-d9a7-4b11-94ae-51d2417546c5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.993295856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5a40cdc-717c-47d4-91e9-b5ddf97950e1 name=/runtime.v1.RuntimeService/Version
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.993408146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5a40cdc-717c-47d4-91e9-b5ddf97950e1 name=/runtime.v1.RuntimeService/Version
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.994976457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a1da277-315d-4400-9e4f-150319a45698 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.995369534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202016995350075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a1da277-315d-4400-9e4f-150319a45698 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.996107584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbeeab13-d746-4258-b6ce-1c6315e16afa name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.996158584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbeeab13-d746-4258-b6ce-1c6315e16afa name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:46:56 ha-570850 crio[684]: time="2024-05-20 10:46:56.996394604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716201751816839075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee,PodSandboxId:f700be37dc7a1d68f57eb08616e09cba5cb0bbf5d9f6ac08734620e3207a2d76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716201549339729413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548128143135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716201548090934743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6
a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141,PodSandboxId:f62741f19bc3771ba99dc80f66de33213cdb68695cdc53def1cee95ba5ab4060,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171620154
6646739550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716201546520866633,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853,PodSandboxId:c9c86ebe9b6430011dbd401a7e401b7cd6094128450d6805c2c9976c9d1d1ac9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716201529346241235,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aa10689e272e459ef8442761a741fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716201526502165946,Labels:map[string]string{io.kubernetes.container.name: kube
-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716201526516945329,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-h
a-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68,PodSandboxId:57ea16243b58bca77d2577b5bcd0c198202ae885abd76ae34565650091fd59ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716201526475253816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab,PodSandboxId:6f83c307ff91489300a8e5c21aaaaebc6a9947b6427a186e299c9c2f42a47bd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716201526464338615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbeeab13-d746-4258-b6ce-1c6315e16afa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ff013665ebf9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   1b009be6473f8       busybox-fc5497c4f-q9r5v
	5dfd2933bae47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   f700be37dc7a1       storage-provisioner
	90d27af32d37b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   7f39567bf239f       coredns-7db6d8ff4d-jw77g
	db09b659f2af2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   5862a2beb755c       coredns-7db6d8ff4d-xr965
	501666a770a6d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   f62741f19bc37       kindnet-2zphj
	2ab644050dd47       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago       Running             kube-proxy                0                   17d9c116389c0       kube-proxy-dnhm6
	5497e7ab0c986       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   c9c86ebe9b643       kube-vip-ha-570850
	3ed532de1dd4e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   6e5a2b4300b4d       etcd-ha-570850
	5f81e882715f6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      8 minutes ago       Running             kube-scheduler            0                   760d0e6343aaf       kube-scheduler-ha-570850
	43a07bf3d1ba6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      8 minutes ago       Running             kube-controller-manager   0                   57ea16243b58b       kube-controller-manager-ha-570850
	25ac8a29b1819       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      8 minutes ago       Running             kube-apiserver            0                   6f83c307ff914       kube-apiserver-ha-570850
	
	
	==> coredns [90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d] <==
	[INFO] 10.244.0.4:37188 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125121s
	[INFO] 10.244.0.4:45557 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011117s
	[INFO] 10.244.1.2:55262 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001714739s
	[INFO] 10.244.1.2:48903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120367s
	[INFO] 10.244.1.2:57868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001671078s
	[INFO] 10.244.1.2:56995 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082609s
	[INFO] 10.244.2.2:58942 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706357s
	[INFO] 10.244.2.2:35126 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090874s
	[INFO] 10.244.2.2:37909 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008474s
	[INFO] 10.244.2.2:51462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166312s
	[INFO] 10.244.2.2:33520 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093626s
	[INFO] 10.244.0.4:59833 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088933s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093196s
	[INFO] 10.244.1.2:50071 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124886s
	[INFO] 10.244.1.2:42327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143875s
	[INFO] 10.244.0.4:35557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115628s
	[INFO] 10.244.0.4:36441 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148453s
	[INFO] 10.244.0.4:52951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110826s
	[INFO] 10.244.0.4:41170 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075418s
	[INFO] 10.244.1.2:54236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124156s
	[INFO] 10.244.1.2:55716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008044s
	[INFO] 10.244.1.2:51672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128329s
	[INFO] 10.244.2.2:44501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195618s
	[INFO] 10.244.2.2:48083 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105613s
	[INFO] 10.244.2.2:41793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131302s
	
	
	==> coredns [db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c] <==
	[INFO] 10.244.1.2:55952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176159s
	[INFO] 10.244.1.2:40232 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000152961s
	[INFO] 10.244.1.2:39961 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001702696s
	[INFO] 10.244.2.2:55319 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001364377s
	[INFO] 10.244.2.2:57122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001355331s
	[INFO] 10.244.0.4:59866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156011s
	[INFO] 10.244.0.4:56885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009797641s
	[INFO] 10.244.0.4:37828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138434s
	[INFO] 10.244.1.2:59957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100136s
	[INFO] 10.244.1.2:41914 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180125s
	[INFO] 10.244.1.2:38547 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077887s
	[INFO] 10.244.1.2:35980 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172936s
	[INFO] 10.244.2.2:50204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019701s
	[INFO] 10.244.2.2:42765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076194s
	[INFO] 10.244.2.2:44033 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001282105s
	[INFO] 10.244.0.4:49359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121025s
	[INFO] 10.244.0.4:41717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077487s
	[INFO] 10.244.0.4:44192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062315s
	[INFO] 10.244.1.2:40081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176555s
	[INFO] 10.244.2.2:58201 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150016s
	[INFO] 10.244.2.2:56308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145342s
	[INFO] 10.244.2.2:36926 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093685s
	[INFO] 10.244.2.2:42968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009681s
	[INFO] 10.244.1.2:34643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:49012 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281845s
	
	
	==> describe nodes <==
	Name:               ha-570850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_38_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:38:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:46:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:42:57 +0000   Mon, 20 May 2024 10:39:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-570850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 328136f6dc1d4de39599ea377c28404e
	  System UUID:                328136f6-dc1d-4de3-9599-ea377c28404e
	  Boot ID:                    bc8aa620-b931-4636-9be0-cbe2c643d897
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q9r5v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-jw77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m52s
	  kube-system                 coredns-7db6d8ff4d-xr965             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m52s
	  kube-system                 etcd-ha-570850                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m5s
	  kube-system                 kindnet-2zphj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m52s
	  kube-system                 kube-apiserver-ha-570850             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-controller-manager-ha-570850    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-proxy-dnhm6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-ha-570850             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-vip-ha-570850                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m50s  kube-proxy       
	  Normal  Starting                 8m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m5s   kubelet          Node ha-570850 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s   kubelet          Node ha-570850 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s   kubelet          Node ha-570850 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m52s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal  NodeReady                7m50s  kubelet          Node ha-570850 status is now: NodeReady
	  Normal  RegisteredNode           5m40s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal  RegisteredNode           4m32s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	
	
	Name:               ha-570850-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:40:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:43:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 10:43:01 +0000   Mon, 20 May 2024 10:44:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-570850-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 679a56af7395441fb77f57941486fb9b
	  System UUID:                679a56af-7395-441f-b77f-57941486fb9b
	  Boot ID:                    354f7a3a-30e0-4c1e-b310-a2f305dc251d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wpv4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-570850-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m56s
	  kube-system                 kindnet-f5x4s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-570850-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-ha-570850-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-proxy-6s5x7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-570850-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-vip-ha-570850-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node ha-570850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  NodeNotReady             2m42s                  node-controller  Node ha-570850-m02 status is now: NodeNotReady
	
	
	Name:               ha-570850-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_42_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:42:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:46:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:42:37 +0000   Mon, 20 May 2024 10:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-570850-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbb5ce07c2d4d77b22b999a94184ed3
	  System UUID:                1dbb5ce0-7c2d-4d77-b22b-999a94184ed3
	  Boot ID:                    170869d7-c32e-447c-b434-e7c7bb44441c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-x2wtn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-570850-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kindnet-6tkd5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m51s
	  kube-system                 kube-apiserver-ha-570850-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-ha-570850-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-n5std                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-ha-570850-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-vip-ha-570850-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node ha-570850-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x7 over 4m51s)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	
	
	Name:               ha-570850-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_43_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:43:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:46:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:43:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-570850-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f2f8926106479fbb09b362a7c92c83
	  System UUID:                14f2f892-6106-479f-bb09-b362a7c92c83
	  Boot ID:                    2b747312-6091-4a44-bf3d-81557c80cae1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-p9s2j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m49s
	  kube-system                 kube-proxy-xt8ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m49s (x2 over 3m49s)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x2 over 3m49s)  kubelet          Node ha-570850-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x2 over 3m49s)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  NodeReady                3m38s                  kubelet          Node ha-570850-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May20 10:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051736] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039898] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514141] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.438828] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.905313] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063920] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182694] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.123506] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.262222] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.187463] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.199064] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.059504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.313134] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[May20 10:39] kauditd_printk_skb: 21 callbacks suppressed
	[May20 10:41] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d] <==
	{"level":"warn","ts":"2024-05-20T10:46:57.255582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.270259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.276268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.286231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.292932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.296597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.302327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.306706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.310247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.318247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.323612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.329204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.33413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.33576Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.339555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.346411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.355002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.361024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.364308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.36715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.372009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.378569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.384717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.392799Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:46:57.419094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:46:57 up 8 min,  0 users,  load average: 0.38, 0.30, 0.16
	Linux ha-570850 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141] <==
	I0520 10:46:18.087384       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:46:28.094181       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:46:28.094224       1 main.go:227] handling current node
	I0520 10:46:28.094234       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:46:28.094239       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:46:28.094354       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:46:28.094379       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:46:28.094425       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:46:28.094448       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:46:38.110531       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:46:38.110570       1 main.go:227] handling current node
	I0520 10:46:38.110580       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:46:38.110586       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:46:38.111848       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:46:38.111994       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:46:38.113177       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:46:38.113212       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:46:48.130078       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:46:48.130119       1 main.go:227] handling current node
	I0520 10:46:48.130131       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:46:48.130136       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:46:48.130531       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:46:48.130561       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:46:48.130617       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:46:48.130691       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab] <==
	W0520 10:38:51.432066       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152]
	I0520 10:38:51.433008       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 10:38:51.439563       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 10:38:51.655574       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 10:38:52.713561       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 10:38:52.731317       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 10:38:52.892821       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 10:39:05.711017       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0520 10:39:05.810742       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0520 10:42:33.706486       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46536: use of closed network connection
	E0520 10:42:33.884526       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46554: use of closed network connection
	E0520 10:42:34.065757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46574: use of closed network connection
	E0520 10:42:34.274096       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46582: use of closed network connection
	E0520 10:42:34.446296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46606: use of closed network connection
	E0520 10:42:34.637920       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46614: use of closed network connection
	E0520 10:42:34.841021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46626: use of closed network connection
	E0520 10:42:35.030344       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46638: use of closed network connection
	E0520 10:42:35.210788       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46654: use of closed network connection
	E0520 10:42:35.492296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46680: use of closed network connection
	E0520 10:42:35.660249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46704: use of closed network connection
	E0520 10:42:35.842898       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46730: use of closed network connection
	E0520 10:42:36.014217       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46746: use of closed network connection
	E0520 10:42:36.196515       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46774: use of closed network connection
	E0520 10:42:36.369345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46796: use of closed network connection
	W0520 10:43:51.442538       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.152 192.168.39.240]
	
	
	==> kube-controller-manager [43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68] <==
	I0520 10:42:28.089329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.918µs"
	I0520 10:42:28.192949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.939µs"
	I0520 10:42:29.201586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.828µs"
	I0520 10:42:29.220422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.313µs"
	I0520 10:42:29.225611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.858µs"
	I0520 10:42:29.242441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.319µs"
	I0520 10:42:29.249621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.523µs"
	I0520 10:42:29.258271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.953µs"
	I0520 10:42:29.554527       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.429µs"
	I0520 10:42:29.577101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.905µs"
	I0520 10:42:31.038899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.294168ms"
	I0520 10:42:31.039181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.495µs"
	I0520 10:42:32.816575       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.341211ms"
	I0520 10:42:32.816951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.198µs"
	I0520 10:42:33.228261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.864906ms"
	I0520 10:42:33.228553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.234µs"
	E0520 10:43:08.861852       1 certificate_controller.go:146] Sync csr-qzlrc failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-qzlrc": the object has been modified; please apply your changes to the latest version and try again
	E0520 10:43:08.873271       1 certificate_controller.go:146] Sync csr-qzlrc failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-qzlrc": the object has been modified; please apply your changes to the latest version and try again
	I0520 10:43:08.962752       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-570850-m04\" does not exist"
	I0520 10:43:08.976348       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-570850-m04" podCIDRs=["10.244.3.0/24"]
	I0520 10:43:10.170326       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-570850-m04"
	I0520 10:43:19.317412       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-570850-m04"
	I0520 10:44:15.211531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-570850-m04"
	I0520 10:44:15.376965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.38039ms"
	I0520 10:44:15.377067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.13µs"
	
	
	==> kube-proxy [2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0] <==
	I0520 10:39:06.784564       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:39:06.815937       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0520 10:39:06.917409       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:39:06.917466       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:39:06.917483       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:39:06.922471       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:39:06.922869       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:39:06.922935       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:39:06.924245       1 config.go:192] "Starting service config controller"
	I0520 10:39:06.924292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:39:06.924321       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:39:06.924324       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:39:06.925272       1 config.go:319] "Starting node config controller"
	I0520 10:39:06.925313       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:39:07.024717       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:39:07.024779       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:39:07.025449       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6] <==
	E0520 10:38:50.885186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0520 10:38:52.984939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 10:42:06.982847       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7q2gn\": pod kube-proxy-7q2gn is already assigned to node \"ha-570850-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7q2gn" node="ha-570850-m03"
	E0520 10:42:06.983037       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f5355064-ed25-4347-963d-911b76bd30aa(kube-system/kube-proxy-7q2gn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7q2gn"
	E0520 10:42:06.983065       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7q2gn\": pod kube-proxy-7q2gn is already assigned to node \"ha-570850-m03\"" pod="kube-system/kube-proxy-7q2gn"
	I0520 10:42:06.983108       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7q2gn" node="ha-570850-m03"
	E0520 10:43:09.036606       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xt8ht\": pod kube-proxy-xt8ht is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xt8ht" node="ha-570850-m04"
	E0520 10:43:09.036861       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xt8ht\": pod kube-proxy-xt8ht is already assigned to node \"ha-570850-m04\"" pod="kube-system/kube-proxy-xt8ht"
	I0520 10:43:09.036997       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xt8ht" node="ha-570850-m04"
	E0520 10:43:09.037530       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-p9s2j\": pod kindnet-p9s2j is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-p9s2j" node="ha-570850-m04"
	E0520 10:43:09.037595       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1fe31478-4f62-41d2-b54c-f6c77522f50b(kube-system/kindnet-p9s2j) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-p9s2j"
	E0520 10:43:09.037619       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-p9s2j\": pod kindnet-p9s2j is already assigned to node \"ha-570850-m04\"" pod="kube-system/kindnet-p9s2j"
	I0520 10:43:09.042099       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-p9s2j" node="ha-570850-m04"
	E0520 10:43:09.115820       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zm8dl\": pod kube-proxy-zm8dl is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zm8dl" node="ha-570850-m04"
	E0520 10:43:09.115912       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f29aece2-7321-46cf-b65e-845a3194350a(kube-system/kube-proxy-zm8dl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zm8dl"
	E0520 10:43:09.115934       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zm8dl\": pod kube-proxy-zm8dl is already assigned to node \"ha-570850-m04\"" pod="kube-system/kube-proxy-zm8dl"
	I0520 10:43:09.115965       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zm8dl" node="ha-570850-m04"
	E0520 10:43:09.240073       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-88gxd\": pod kindnet-88gxd is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-88gxd" node="ha-570850-m04"
	E0520 10:43:09.240374       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b586d037-5383-4b68-890e-d703bbd704ee(kube-system/kindnet-88gxd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-88gxd"
	E0520 10:43:09.240455       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-88gxd\": pod kindnet-88gxd is already assigned to node \"ha-570850-m04\"" pod="kube-system/kindnet-88gxd"
	I0520 10:43:09.240546       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-88gxd" node="ha-570850-m04"
	E0520 10:43:09.240746       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kt46j\": pod kube-proxy-kt46j is already assigned to node \"ha-570850-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kt46j" node="ha-570850-m04"
	E0520 10:43:09.243239       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4188c495-b918-43ea-9526-73e8061e8f78(kube-system/kube-proxy-kt46j) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kt46j"
	E0520 10:43:09.243341       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kt46j\": pod kube-proxy-kt46j is already assigned to node \"ha-570850-m04\"" pod="kube-system/kube-proxy-kt46j"
	I0520 10:43:09.243593       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kt46j" node="ha-570850-m04"
	
	
	==> kubelet <==
	May 20 10:42:52 ha-570850 kubelet[1371]: E0520 10:42:52.853616    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:42:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:42:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:42:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:42:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:43:52 ha-570850 kubelet[1371]: E0520 10:43:52.872500    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:43:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:43:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:43:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:43:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:44:52 ha-570850 kubelet[1371]: E0520 10:44:52.860965    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:44:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:44:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:44:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:44:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:45:52 ha-570850 kubelet[1371]: E0520 10:45:52.854700    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:45:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:45:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:45:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:45:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:46:52 ha-570850 kubelet[1371]: E0520 10:46:52.855955    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:46:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:46:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:46:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:46:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-570850 -n ha-570850
helpers_test.go:261: (dbg) Run:  kubectl --context ha-570850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.56s)
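For local debugging, the post-mortem data shown above can be re-collected by hand. A minimal sketch, assuming the ha-570850 profile and the out/minikube-linux-amd64 binary from this run are still present on the host:

	# Sketch only: re-collect the post-mortem data for the ha-570850 profile.
	out/minikube-linux-amd64 status -p ha-570850
	kubectl --context ha-570850 describe nodes
	kubectl --context ha-570850 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 logs -p ha-570850 --file=logs.txt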

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (396.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-570850 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-570850 -v=7 --alsologtostderr
E0520 10:47:19.191639   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:47:46.875447   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-570850 -v=7 --alsologtostderr: exit status 82 (2m1.958702505s)

                                                
                                                
-- stdout --
	* Stopping node "ha-570850-m04"  ...
	* Stopping node "ha-570850-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:46:58.851416   28070 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:46:58.851633   28070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:58.851642   28070 out.go:304] Setting ErrFile to fd 2...
	I0520 10:46:58.851646   28070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:46:58.851851   28070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:46:58.852093   28070 out.go:298] Setting JSON to false
	I0520 10:46:58.852175   28070 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:58.852492   28070 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:58.852570   28070 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:46:58.852741   28070 mustload.go:65] Loading cluster: ha-570850
	I0520 10:46:58.852864   28070 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:46:58.852891   28070 stop.go:39] StopHost: ha-570850-m04
	I0520 10:46:58.853270   28070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:46:58.853321   28070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:46:58.867834   28070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0520 10:46:58.868324   28070 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:46:58.868969   28070 main.go:141] libmachine: Using API Version  1
	I0520 10:46:58.868989   28070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:46:58.869376   28070 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:46:58.871970   28070 out.go:177] * Stopping node "ha-570850-m04"  ...
	I0520 10:46:58.873391   28070 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 10:46:58.873421   28070 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:46:58.873648   28070 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 10:46:58.873677   28070 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:46:58.876595   28070 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:58.877101   28070 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:42:51 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:46:58.877131   28070 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:46:58.877295   28070 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:46:58.877463   28070 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:46:58.877596   28070 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:46:58.877750   28070 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:46:58.962299   28070 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 10:46:59.015596   28070 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 10:46:59.069596   28070 main.go:141] libmachine: Stopping "ha-570850-m04"...
	I0520 10:46:59.069637   28070 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:46:59.071227   28070 main.go:141] libmachine: (ha-570850-m04) Calling .Stop
	I0520 10:46:59.074765   28070 main.go:141] libmachine: (ha-570850-m04) Waiting for machine to stop 0/120
	I0520 10:47:00.353864   28070 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:47:00.355173   28070 main.go:141] libmachine: Machine "ha-570850-m04" was stopped.
	I0520 10:47:00.355189   28070 stop.go:75] duration metric: took 1.481799613s to stop
	I0520 10:47:00.355208   28070 stop.go:39] StopHost: ha-570850-m03
	I0520 10:47:00.355496   28070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:47:00.355540   28070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:47:00.369899   28070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33113
	I0520 10:47:00.370287   28070 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:47:00.370674   28070 main.go:141] libmachine: Using API Version  1
	I0520 10:47:00.370691   28070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:47:00.371082   28070 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:47:00.373105   28070 out.go:177] * Stopping node "ha-570850-m03"  ...
	I0520 10:47:00.374592   28070 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 10:47:00.374619   28070 main.go:141] libmachine: (ha-570850-m03) Calling .DriverName
	I0520 10:47:00.374812   28070 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 10:47:00.374832   28070 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHHostname
	I0520 10:47:00.377298   28070 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:47:00.377763   28070 main.go:141] libmachine: (ha-570850-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:76:2b", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:41:34 +0000 UTC Type:0 Mac:52:54:00:7b:76:2b Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-570850-m03 Clientid:01:52:54:00:7b:76:2b}
	I0520 10:47:00.377796   28070 main.go:141] libmachine: (ha-570850-m03) DBG | domain ha-570850-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:7b:76:2b in network mk-ha-570850
	I0520 10:47:00.377855   28070 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHPort
	I0520 10:47:00.378013   28070 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHKeyPath
	I0520 10:47:00.378178   28070 main.go:141] libmachine: (ha-570850-m03) Calling .GetSSHUsername
	I0520 10:47:00.378322   28070 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m03/id_rsa Username:docker}
	I0520 10:47:00.464919   28070 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 10:47:00.519645   28070 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 10:47:00.573681   28070 main.go:141] libmachine: Stopping "ha-570850-m03"...
	I0520 10:47:00.573708   28070 main.go:141] libmachine: (ha-570850-m03) Calling .GetState
	I0520 10:47:00.575185   28070 main.go:141] libmachine: (ha-570850-m03) Calling .Stop
	I0520 10:47:00.578709   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 0/120
	I0520 10:47:01.580235   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 1/120
	I0520 10:47:02.581699   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 2/120
	I0520 10:47:03.583058   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 3/120
	I0520 10:47:04.584279   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 4/120
	I0520 10:47:05.586082   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 5/120
	I0520 10:47:06.587547   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 6/120
	I0520 10:47:07.588972   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 7/120
	I0520 10:47:08.590434   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 8/120
	I0520 10:47:09.591837   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 9/120
	I0520 10:47:10.593648   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 10/120
	I0520 10:47:11.595082   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 11/120
	I0520 10:47:12.596493   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 12/120
	I0520 10:47:13.597872   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 13/120
	I0520 10:47:14.599198   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 14/120
	I0520 10:47:15.601579   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 15/120
	I0520 10:47:16.603110   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 16/120
	I0520 10:47:17.604489   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 17/120
	I0520 10:47:18.605754   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 18/120
	I0520 10:47:19.607145   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 19/120
	I0520 10:47:20.608898   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 20/120
	I0520 10:47:21.610561   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 21/120
	I0520 10:47:22.611878   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 22/120
	I0520 10:47:23.614058   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 23/120
	I0520 10:47:24.615315   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 24/120
	I0520 10:47:25.617198   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 25/120
	I0520 10:47:26.618500   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 26/120
	I0520 10:47:27.620025   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 27/120
	I0520 10:47:28.621718   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 28/120
	I0520 10:47:29.623388   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 29/120
	I0520 10:47:30.624909   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 30/120
	I0520 10:47:31.626657   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 31/120
	I0520 10:47:32.628144   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 32/120
	I0520 10:47:33.629522   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 33/120
	I0520 10:47:34.630892   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 34/120
	I0520 10:47:35.633166   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 35/120
	I0520 10:47:36.634923   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 36/120
	I0520 10:47:37.636232   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 37/120
	I0520 10:47:38.638033   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 38/120
	I0520 10:47:39.639420   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 39/120
	I0520 10:47:40.640832   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 40/120
	I0520 10:47:41.641961   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 41/120
	I0520 10:47:42.643341   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 42/120
	I0520 10:47:43.644769   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 43/120
	I0520 10:47:44.646014   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 44/120
	I0520 10:47:45.647577   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 45/120
	I0520 10:47:46.648948   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 46/120
	I0520 10:47:47.650937   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 47/120
	I0520 10:47:48.652115   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 48/120
	I0520 10:47:49.653985   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 49/120
	I0520 10:47:50.655486   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 50/120
	I0520 10:47:51.656740   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 51/120
	I0520 10:47:52.658575   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 52/120
	I0520 10:47:53.659855   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 53/120
	I0520 10:47:54.661202   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 54/120
	I0520 10:47:55.663070   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 55/120
	I0520 10:47:56.664463   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 56/120
	I0520 10:47:57.665731   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 57/120
	I0520 10:47:58.667229   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 58/120
	I0520 10:47:59.668608   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 59/120
	I0520 10:48:00.670350   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 60/120
	I0520 10:48:01.671789   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 61/120
	I0520 10:48:02.673137   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 62/120
	I0520 10:48:03.675467   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 63/120
	I0520 10:48:04.676647   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 64/120
	I0520 10:48:05.678119   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 65/120
	I0520 10:48:06.679909   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 66/120
	I0520 10:48:07.681242   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 67/120
	I0520 10:48:08.682570   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 68/120
	I0520 10:48:09.684017   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 69/120
	I0520 10:48:10.685788   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 70/120
	I0520 10:48:11.687272   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 71/120
	I0520 10:48:12.688958   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 72/120
	I0520 10:48:13.690098   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 73/120
	I0520 10:48:14.691439   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 74/120
	I0520 10:48:15.693823   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 75/120
	I0520 10:48:16.695078   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 76/120
	I0520 10:48:17.697256   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 77/120
	I0520 10:48:18.699268   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 78/120
	I0520 10:48:19.700408   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 79/120
	I0520 10:48:20.701939   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 80/120
	I0520 10:48:21.703100   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 81/120
	I0520 10:48:22.704408   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 82/120
	I0520 10:48:23.706421   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 83/120
	I0520 10:48:24.707724   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 84/120
	I0520 10:48:25.709426   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 85/120
	I0520 10:48:26.710690   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 86/120
	I0520 10:48:27.712295   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 87/120
	I0520 10:48:28.713885   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 88/120
	I0520 10:48:29.715257   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 89/120
	I0520 10:48:30.716963   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 90/120
	I0520 10:48:31.718242   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 91/120
	I0520 10:48:32.719503   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 92/120
	I0520 10:48:33.720860   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 93/120
	I0520 10:48:34.722235   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 94/120
	I0520 10:48:35.723610   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 95/120
	I0520 10:48:36.725172   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 96/120
	I0520 10:48:37.726497   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 97/120
	I0520 10:48:38.727712   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 98/120
	I0520 10:48:39.728955   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 99/120
	I0520 10:48:40.730650   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 100/120
	I0520 10:48:41.731950   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 101/120
	I0520 10:48:42.733449   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 102/120
	I0520 10:48:43.734853   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 103/120
	I0520 10:48:44.736170   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 104/120
	I0520 10:48:45.738098   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 105/120
	I0520 10:48:46.739697   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 106/120
	I0520 10:48:47.741280   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 107/120
	I0520 10:48:48.742507   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 108/120
	I0520 10:48:49.743833   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 109/120
	I0520 10:48:50.745221   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 110/120
	I0520 10:48:51.747384   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 111/120
	I0520 10:48:52.749421   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 112/120
	I0520 10:48:53.750804   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 113/120
	I0520 10:48:54.752189   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 114/120
	I0520 10:48:55.754389   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 115/120
	I0520 10:48:56.755807   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 116/120
	I0520 10:48:57.757365   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 117/120
	I0520 10:48:58.758710   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 118/120
	I0520 10:48:59.760095   28070 main.go:141] libmachine: (ha-570850-m03) Waiting for machine to stop 119/120
	I0520 10:49:00.760844   28070 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 10:49:00.760891   28070 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 10:49:00.763181   28070 out.go:177] 
	W0520 10:49:00.764496   28070 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 10:49:00.764514   28070 out.go:239] * 
	W0520 10:49:00.766809   28070 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 10:49:00.768320   28070 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-570850 -v=7 --alsologtostderr" : exit status 82
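The stop failure above matches the pattern visible in the log: the kvm2 driver polls the machine state roughly once per second for 120 attempts, and when the VM is still "Running" after the final attempt it returns a temporary error that the caller reports as GUEST_STOP_TIMEOUT (exit status 82). Below is a minimal Go sketch of that poll-until-timeout behaviour, using hypothetical names (waitForStop, vmState) rather than minikube's actual driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a stand-in for the driver's state query result (hypothetical).
type vmState string

const stateRunning vmState = "Running"

// waitForStop polls the VM state once per second, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(getState func() vmState, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() != stateRunning {
			return nil // machine stopped in time
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	// Still running after the last attempt: the caller treats this as a
	// temporary error and exits with GUEST_STOP_TIMEOUT.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never stops, as in the failed test run.
	err := waitForStop(func() vmState { return stateRunning }, 120)
	if err != nil {
		fmt.Println("stop err:", err)
	}
}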
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-570850 --wait=true -v=7 --alsologtostderr
E0520 10:49:54.310705   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:51:17.357056   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:52:19.191545   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-570850 --wait=true -v=7 --alsologtostderr: exit status 80 (4m32.449164517s)
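For context on how the harness records these results: each "(dbg) Run" line corresponds to invoking the minikube binary as a subprocess, capturing its combined output, and checking the exit status (82 for the failed stop, 80 for the failed restart reported just above). A minimal Go sketch of that run-and-check pattern follows; runMinikube is a hypothetical helper for illustration, not the actual ha_test.go code.

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube runs the minikube binary with the given args and returns the
// combined output plus the process exit code (hypothetical helper).
func runMinikube(args ...string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code, err
}

func main() {
	// Example: re-running the restart command from the log and checking
	// that it exits with status 0.
	out, code, err := runMinikube("start", "-p", "ha-570850", "--wait=true", "-v=7", "--alsologtostderr")
	if err != nil || code != 0 {
		fmt.Printf("non-zero exit: %d\n%s\n", code, out)
	}
}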

                                                
                                                
-- stdout --
	* [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	* Updating the running kvm2 "ha-570850" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-570850-m02" control-plane node in "ha-570850" cluster
	* Restarting existing kvm2 VM for "ha-570850-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.152
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.152
	* Verifying Kubernetes components...
	
	* Starting "ha-570850-m03" control-plane node in "ha-570850" cluster
	* Restarting existing kvm2 VM for "ha-570850-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.152,192.168.39.111
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.152
	  - env NO_PROXY=192.168.39.152,192.168.39.111
	* Verifying Kubernetes components...
	
	* Starting "ha-570850-m04" worker node in "ha-570850" cluster
	* Restarting existing kvm2 VM for "ha-570850-m04" ...
	* Updating the running kvm2 "ha-570850-m04" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:49:00.810311   28545 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:49:00.810557   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810568   28545 out.go:304] Setting ErrFile to fd 2...
	I0520 10:49:00.810574   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810762   28545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:49:00.811306   28545 out.go:298] Setting JSON to false
	I0520 10:49:00.812227   28545 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1884,"bootTime":1716200257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:49:00.812278   28545 start.go:139] virtualization: kvm guest
	I0520 10:49:00.814785   28545 out.go:177] * [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:49:00.816402   28545 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:49:00.816404   28545 notify.go:220] Checking for updates...
	I0520 10:49:00.817786   28545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:49:00.819143   28545 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:49:00.820484   28545 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:49:00.821703   28545 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:49:00.822978   28545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:49:00.824648   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:00.824745   28545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:49:00.825258   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.825308   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.841522   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0520 10:49:00.841935   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.842526   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.842555   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.842946   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.843128   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.877361   28545 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 10:49:00.878748   28545 start.go:297] selected driver: kvm2
	I0520 10:49:00.878771   28545 start.go:901] validating driver "kvm2" against &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.878981   28545 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:49:00.879321   28545 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.879400   28545 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:49:00.893725   28545 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:49:00.894355   28545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:49:00.894384   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:49:00.894392   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:49:00.894441   28545 start.go:340] cluster config:
	{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.894551   28545 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.896612   28545 out.go:177] * Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	I0520 10:49:00.897801   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:49:00.897833   28545 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:49:00.897846   28545 cache.go:56] Caching tarball of preloaded images
	I0520 10:49:00.897966   28545 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:49:00.897980   28545 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:49:00.898144   28545 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:49:00.898400   28545 start.go:360] acquireMachinesLock for ha-570850: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:49:00.898447   28545 start.go:364] duration metric: took 28.765µs to acquireMachinesLock for "ha-570850"
	I0520 10:49:00.898467   28545 start.go:96] Skipping create...Using existing machine configuration
	I0520 10:49:00.898476   28545 fix.go:54] fixHost starting: 
	I0520 10:49:00.898758   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.898808   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.912154   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0520 10:49:00.912545   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.913059   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.913080   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.913373   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.913661   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.913859   28545 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:49:00.915287   28545 fix.go:112] recreateIfNeeded on ha-570850: state=Running err=<nil>
	W0520 10:49:00.915306   28545 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 10:49:00.917201   28545 out.go:177] * Updating the running kvm2 "ha-570850" VM ...
	I0520 10:49:00.918411   28545 machine.go:94] provisionDockerMachine start ...
	I0520 10:49:00.918427   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.918614   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:00.921039   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921498   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:00.921531   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921695   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:00.921876   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922030   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922197   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:00.922398   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:00.922593   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:00.922606   28545 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:49:01.030186   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.030212   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030430   28545 buildroot.go:166] provisioning hostname "ha-570850"
	I0520 10:49:01.030458   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030664   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.033274   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033744   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.033768   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033918   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.034109   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034287   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034442   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.034599   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.034836   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.034853   28545 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850 && echo "ha-570850" | sudo tee /etc/hostname
	I0520 10:49:01.156409   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.156435   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.159025   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159417   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.159442   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159603   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.159765   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159884   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159977   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.160116   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.160263   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.160277   28545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:49:01.267106   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:49:01.267139   28545 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:49:01.267167   28545 buildroot.go:174] setting up certificates
	I0520 10:49:01.267174   28545 provision.go:84] configureAuth start
	I0520 10:49:01.267182   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.267438   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:49:01.270043   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270426   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.270462   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270636   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.272701   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273087   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.273114   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273209   28545 provision.go:143] copyHostCerts
	I0520 10:49:01.273239   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273270   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:49:01.273284   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273361   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:49:01.273463   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273485   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:49:01.273492   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273528   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:49:01.273608   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273631   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:49:01.273640   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273674   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:49:01.273756   28545 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850 san=[127.0.0.1 192.168.39.152 ha-570850 localhost minikube]
	I0520 10:49:01.408798   28545 provision.go:177] copyRemoteCerts
	I0520 10:49:01.408849   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:49:01.408885   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.411703   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412078   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.412113   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412257   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.412414   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.412576   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.412668   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:49:01.496371   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:49:01.496434   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:49:01.524309   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:49:01.524367   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:49:01.550409   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:49:01.550477   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0520 10:49:01.574937   28545 provision.go:87] duration metric: took 307.75201ms to configureAuth
	I0520 10:49:01.574962   28545 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:49:01.575171   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:01.575233   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.577826   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578239   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.578268   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578468   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.578697   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.578855   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.579015   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.579170   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.579333   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.579356   28545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:50:32.380185   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:50:32.380214   28545 machine.go:97] duration metric: took 1m31.461789949s to provisionDockerMachine
	I0520 10:50:32.380235   28545 start.go:293] postStartSetup for "ha-570850" (driver="kvm2")
	I0520 10:50:32.380249   28545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:50:32.380272   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.380603   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:50:32.380647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.383513   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.383918   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.383943   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.384065   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.384230   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.384381   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.384523   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.468651   28545 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:50:32.473075   28545 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:50:32.473099   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:50:32.473170   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:50:32.473245   28545 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:50:32.473254   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:50:32.473344   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:50:32.483198   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:32.507373   28545 start.go:296] duration metric: took 127.126357ms for postStartSetup
	I0520 10:50:32.507413   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.507673   28545 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 10:50:32.507693   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.510108   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510473   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.510497   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510642   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.510839   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.510983   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.511169   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.595598   28545 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 10:50:32.595620   28545 fix.go:56] duration metric: took 1m31.697144912s for fixHost
	I0520 10:50:32.595647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.598084   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598387   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.598424   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598549   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.598816   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599002   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599162   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.599307   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:50:32.599465   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:50:32.599475   28545 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 10:50:32.706542   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202232.681128286
	
	I0520 10:50:32.706566   28545 fix.go:216] guest clock: 1716202232.681128286
	I0520 10:50:32.706575   28545 fix.go:229] Guest: 2024-05-20 10:50:32.681128286 +0000 UTC Remote: 2024-05-20 10:50:32.595625893 +0000 UTC m=+91.817262454 (delta=85.502393ms)
	I0520 10:50:32.706607   28545 fix.go:200] guest clock delta is within tolerance: 85.502393ms
	I0520 10:50:32.706616   28545 start.go:83] releasing machines lock for "ha-570850", held for 1m31.808156278s
	I0520 10:50:32.706633   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.706850   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:32.709202   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709544   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.709569   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709734   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710174   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710331   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710424   28545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:50:32.710473   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.710479   28545 ssh_runner.go:195] Run: cat /version.json
	I0520 10:50:32.710495   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.712748   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713063   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713193   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713216   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713322   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713480   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713491   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713507   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713666   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.713676   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713846   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713854   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.713963   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.714072   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.810770   28545 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:50:32.810863   28545 ssh_runner.go:195] Run: systemctl --version
	I0520 10:50:32.817315   28545 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:50:32.989007   28545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:50:32.995244   28545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:50:32.995308   28545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:50:33.005340   28545 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 10:50:33.005361   28545 start.go:494] detecting cgroup driver to use...
	I0520 10:50:33.005418   28545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:50:33.023319   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:50:33.037817   28545 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:50:33.037877   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:50:33.052368   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:50:33.066569   28545 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:50:33.215546   28545 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:50:33.358676   28545 docker.go:233] disabling docker service ...
	I0520 10:50:33.358732   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:50:33.375689   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:50:33.389459   28545 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:50:33.538495   28545 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:50:33.698476   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:50:33.715452   28545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:50:33.734625   28545 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:50:33.734681   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.745464   28545 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:50:33.745532   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.756460   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.767137   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.777739   28545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:50:33.788287   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.798513   28545 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.809204   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.819733   28545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:50:33.829957   28545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:50:33.839650   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:33.982271   28545 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:50:34.731686   28545 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:50:34.731752   28545 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:50:34.736749   28545 start.go:562] Will wait 60s for crictl version
	I0520 10:50:34.736814   28545 ssh_runner.go:195] Run: which crictl
	I0520 10:50:34.740764   28545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:50:34.788346   28545 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:50:34.788434   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.817827   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.851583   28545 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:50:34.852979   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:34.855568   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.855886   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:34.855910   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.856082   28545 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:50:34.860885   28545 kubeadm.go:877] updating cluster {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:50:34.861058   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:50:34.861115   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.904271   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.904291   28545 crio.go:433] Images already preloaded, skipping extraction
	I0520 10:50:34.904333   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.939632   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.939656   28545 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:50:34.939667   28545 kubeadm.go:928] updating node { 192.168.39.152 8443 v1.30.1 crio true true} ...
	I0520 10:50:34.939770   28545 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:50:34.939868   28545 ssh_runner.go:195] Run: crio config
	I0520 10:50:34.996367   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:50:34.996383   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:50:34.996396   28545 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:50:34.996422   28545 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-570850 NodeName:ha-570850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:50:34.996550   28545 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-570850"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 10:50:34.996568   28545 kube-vip.go:115] generating kube-vip config ...
	I0520 10:50:34.996616   28545 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:50:35.009156   28545 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:50:35.009260   28545 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 10:50:35.009315   28545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:50:35.019441   28545 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:50:35.019484   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 10:50:35.029250   28545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 10:50:35.045718   28545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:50:35.062017   28545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 10:50:35.078117   28545 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:50:35.094874   28545 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:50:35.099790   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:35.242816   28545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:50:35.258353   28545 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.152
	I0520 10:50:35.258374   28545 certs.go:194] generating shared ca certs ...
	I0520 10:50:35.258393   28545 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.258537   28545 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:50:35.258592   28545 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:50:35.258615   28545 certs.go:256] generating profile certs ...
	I0520 10:50:35.258705   28545 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:50:35.258743   28545 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f
	I0520 10:50:35.258789   28545 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.240 192.168.39.254]
	I0520 10:50:35.323349   28545 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f ...
	I0520 10:50:35.323379   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f: {Name:mk80a84971cac50baec0222bf4535f70c6381cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323558   28545 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f ...
	I0520 10:50:35.323576   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f: {Name:mk3eecf14b574a3dc74183c4476fb213e98e80f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323674   28545 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:50:35.323862   28545 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:50:35.324034   28545 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:50:35.324052   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:50:35.324068   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:50:35.324091   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:50:35.324110   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:50:35.324129   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:50:35.324149   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:50:35.324168   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:50:35.324186   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:50:35.324246   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:50:35.324285   28545 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:50:35.324301   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:50:35.324334   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:50:35.324365   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:50:35.324401   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:50:35.324453   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:35.324492   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.324512   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.324531   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.325124   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:50:35.350917   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:50:35.375112   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:50:35.398626   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:50:35.422688   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 10:50:35.447172   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:50:35.470738   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:50:35.553009   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:50:35.610723   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:50:35.648271   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:50:35.677258   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:50:35.710822   28545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:50:35.733493   28545 ssh_runner.go:195] Run: openssl version
	I0520 10:50:35.740566   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:50:35.753475   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759848   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759889   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.765875   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:50:35.782991   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:50:35.798421   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805287   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805341   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.819309   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:50:35.829389   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:50:35.840391   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844907   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844994   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.850890   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:50:35.860742   28545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:50:35.865350   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 10:50:35.871853   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 10:50:35.878065   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 10:50:35.883542   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 10:50:35.889358   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 10:50:35.895053   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 10:50:35.900431   28545 kubeadm.go:391] StartCluster: {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:50:35.900524   28545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:50:35.900574   28545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:50:35.939457   28545 cri.go:89] found id: "0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52"
	I0520 10:50:35.939478   28545 cri.go:89] found id: "1554ca782d2e3f7c42b06a3c0e74531de3d26d0ca5fe60fa686a7b6afa191326"
	I0520 10:50:35.939484   28545 cri.go:89] found id: "e07aed17013f0a9702039b0e913a8f3ee47ae3a2c5783ffb5c6d19619cbaf058"
	I0520 10:50:35.939489   28545 cri.go:89] found id: "53da5ef1aad0da9abe35a5f2db0970025b8855cb63c2361b1c617a487c1b0156"
	I0520 10:50:35.939493   28545 cri.go:89] found id: "a996ad2d62ca76ff1ec8d79e5907a3dccdc7fcf6178ff99795bd7bec168652ed"
	I0520 10:50:35.939497   28545 cri.go:89] found id: "5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee"
	I0520 10:50:35.939501   28545 cri.go:89] found id: "90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d"
	I0520 10:50:35.939504   28545 cri.go:89] found id: "db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c"
	I0520 10:50:35.939507   28545 cri.go:89] found id: "501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141"
	I0520 10:50:35.939513   28545 cri.go:89] found id: "2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0"
	I0520 10:50:35.939517   28545 cri.go:89] found id: "5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853"
	I0520 10:50:35.939520   28545 cri.go:89] found id: "3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d"
	I0520 10:50:35.939524   28545 cri.go:89] found id: "5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6"
	I0520 10:50:35.939528   28545 cri.go:89] found id: "43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68"
	I0520 10:50:35.939536   28545 cri.go:89] found id: "25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab"
	I0520 10:50:35.939542   28545 cri.go:89] found id: ""
	I0520 10:50:35.939589   28545 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-570850 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-570850
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-570850 -n ha-570850
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-570850 logs -n 25: (1.817490752s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m04 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp testdata/cp-test.txt                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m04_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03:/home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m03 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-570850 node stop m02 -v=7                                                     | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-570850 node start m02 -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-570850 -v=7                                                           | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-570850 -v=7                                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-570850 --wait=true -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-570850                                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:53 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:49:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:49:00.810311   28545 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:49:00.810557   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810568   28545 out.go:304] Setting ErrFile to fd 2...
	I0520 10:49:00.810574   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810762   28545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:49:00.811306   28545 out.go:298] Setting JSON to false
	I0520 10:49:00.812227   28545 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1884,"bootTime":1716200257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:49:00.812278   28545 start.go:139] virtualization: kvm guest
	I0520 10:49:00.814785   28545 out.go:177] * [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:49:00.816402   28545 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:49:00.816404   28545 notify.go:220] Checking for updates...
	I0520 10:49:00.817786   28545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:49:00.819143   28545 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:49:00.820484   28545 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:49:00.821703   28545 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:49:00.822978   28545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:49:00.824648   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:00.824745   28545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:49:00.825258   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.825308   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.841522   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0520 10:49:00.841935   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.842526   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.842555   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.842946   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.843128   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.877361   28545 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 10:49:00.878748   28545 start.go:297] selected driver: kvm2
	I0520 10:49:00.878771   28545 start.go:901] validating driver "kvm2" against &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.878981   28545 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:49:00.879321   28545 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.879400   28545 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:49:00.893725   28545 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:49:00.894355   28545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:49:00.894384   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:49:00.894392   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:49:00.894441   28545 start.go:340] cluster config:
	{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.894551   28545 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.896612   28545 out.go:177] * Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	I0520 10:49:00.897801   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:49:00.897833   28545 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:49:00.897846   28545 cache.go:56] Caching tarball of preloaded images
	I0520 10:49:00.897966   28545 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:49:00.897980   28545 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:49:00.898144   28545 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:49:00.898400   28545 start.go:360] acquireMachinesLock for ha-570850: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:49:00.898447   28545 start.go:364] duration metric: took 28.765µs to acquireMachinesLock for "ha-570850"
	I0520 10:49:00.898467   28545 start.go:96] Skipping create...Using existing machine configuration
	I0520 10:49:00.898476   28545 fix.go:54] fixHost starting: 
	I0520 10:49:00.898758   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.898808   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.912154   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0520 10:49:00.912545   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.913059   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.913080   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.913373   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.913661   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.913859   28545 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:49:00.915287   28545 fix.go:112] recreateIfNeeded on ha-570850: state=Running err=<nil>
	W0520 10:49:00.915306   28545 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 10:49:00.917201   28545 out.go:177] * Updating the running kvm2 "ha-570850" VM ...
	I0520 10:49:00.918411   28545 machine.go:94] provisionDockerMachine start ...
	I0520 10:49:00.918427   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.918614   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:00.921039   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921498   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:00.921531   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921695   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:00.921876   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922030   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922197   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:00.922398   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:00.922593   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:00.922606   28545 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:49:01.030186   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.030212   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030430   28545 buildroot.go:166] provisioning hostname "ha-570850"
	I0520 10:49:01.030458   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030664   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.033274   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033744   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.033768   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033918   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.034109   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034287   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034442   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.034599   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.034836   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.034853   28545 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850 && echo "ha-570850" | sudo tee /etc/hostname
	I0520 10:49:01.156409   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.156435   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.159025   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159417   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.159442   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159603   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.159765   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159884   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159977   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.160116   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.160263   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.160277   28545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:49:01.267106   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:49:01.267139   28545 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:49:01.267167   28545 buildroot.go:174] setting up certificates
	I0520 10:49:01.267174   28545 provision.go:84] configureAuth start
	I0520 10:49:01.267182   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.267438   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:49:01.270043   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270426   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.270462   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270636   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.272701   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273087   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.273114   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273209   28545 provision.go:143] copyHostCerts
	I0520 10:49:01.273239   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273270   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:49:01.273284   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273361   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:49:01.273463   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273485   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:49:01.273492   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273528   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:49:01.273608   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273631   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:49:01.273640   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273674   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:49:01.273756   28545 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850 san=[127.0.0.1 192.168.39.152 ha-570850 localhost minikube]
	I0520 10:49:01.408798   28545 provision.go:177] copyRemoteCerts
	I0520 10:49:01.408849   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:49:01.408885   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.411703   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412078   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.412113   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412257   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.412414   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.412576   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.412668   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:49:01.496371   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:49:01.496434   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:49:01.524309   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:49:01.524367   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:49:01.550409   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:49:01.550477   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0520 10:49:01.574937   28545 provision.go:87] duration metric: took 307.75201ms to configureAuth
	I0520 10:49:01.574962   28545 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:49:01.575171   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:01.575233   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.577826   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578239   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.578268   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578468   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.578697   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.578855   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.579015   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.579170   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.579333   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.579356   28545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:50:32.380185   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:50:32.380214   28545 machine.go:97] duration metric: took 1m31.461789949s to provisionDockerMachine
	I0520 10:50:32.380235   28545 start.go:293] postStartSetup for "ha-570850" (driver="kvm2")
	I0520 10:50:32.380249   28545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:50:32.380272   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.380603   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:50:32.380647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.383513   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.383918   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.383943   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.384065   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.384230   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.384381   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.384523   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.468651   28545 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:50:32.473075   28545 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:50:32.473099   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:50:32.473170   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:50:32.473245   28545 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:50:32.473254   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:50:32.473344   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:50:32.483198   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:32.507373   28545 start.go:296] duration metric: took 127.126357ms for postStartSetup
	I0520 10:50:32.507413   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.507673   28545 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 10:50:32.507693   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.510108   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510473   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.510497   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510642   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.510839   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.510983   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.511169   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.595598   28545 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 10:50:32.595620   28545 fix.go:56] duration metric: took 1m31.697144912s for fixHost
	I0520 10:50:32.595647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.598084   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598387   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.598424   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598549   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.598816   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599002   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599162   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.599307   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:50:32.599465   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:50:32.599475   28545 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:50:32.706542   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202232.681128286
	
	I0520 10:50:32.706566   28545 fix.go:216] guest clock: 1716202232.681128286
	I0520 10:50:32.706575   28545 fix.go:229] Guest: 2024-05-20 10:50:32.681128286 +0000 UTC Remote: 2024-05-20 10:50:32.595625893 +0000 UTC m=+91.817262454 (delta=85.502393ms)
	I0520 10:50:32.706607   28545 fix.go:200] guest clock delta is within tolerance: 85.502393ms
	I0520 10:50:32.706616   28545 start.go:83] releasing machines lock for "ha-570850", held for 1m31.808156278s
	I0520 10:50:32.706633   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.706850   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:32.709202   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709544   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.709569   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709734   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710174   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710331   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710424   28545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:50:32.710473   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.710479   28545 ssh_runner.go:195] Run: cat /version.json
	I0520 10:50:32.710495   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.712748   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713063   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713193   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713216   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713322   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713480   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713491   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713507   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713666   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.713676   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713846   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713854   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.713963   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.714072   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.810770   28545 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:50:32.810863   28545 ssh_runner.go:195] Run: systemctl --version
	I0520 10:50:32.817315   28545 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:50:32.989007   28545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:50:32.995244   28545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:50:32.995308   28545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:50:33.005340   28545 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 10:50:33.005361   28545 start.go:494] detecting cgroup driver to use...
	I0520 10:50:33.005418   28545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:50:33.023319   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:50:33.037817   28545 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:50:33.037877   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:50:33.052368   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:50:33.066569   28545 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:50:33.215546   28545 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:50:33.358676   28545 docker.go:233] disabling docker service ...
	I0520 10:50:33.358732   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:50:33.375689   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:50:33.389459   28545 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:50:33.538495   28545 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:50:33.698476   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:50:33.715452   28545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:50:33.734625   28545 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:50:33.734681   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.745464   28545 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:50:33.745532   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.756460   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.767137   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.777739   28545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:50:33.788287   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.798513   28545 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.809204   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.819733   28545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:50:33.829957   28545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:50:33.839650   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:33.982271   28545 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:50:34.731686   28545 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:50:34.731752   28545 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:50:34.736749   28545 start.go:562] Will wait 60s for crictl version
	I0520 10:50:34.736814   28545 ssh_runner.go:195] Run: which crictl
	I0520 10:50:34.740764   28545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:50:34.788346   28545 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:50:34.788434   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.817827   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.851583   28545 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:50:34.852979   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:34.855568   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.855886   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:34.855910   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.856082   28545 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:50:34.860885   28545 kubeadm.go:877] updating cluster {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:50:34.861058   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:50:34.861115   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.904271   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.904291   28545 crio.go:433] Images already preloaded, skipping extraction
	I0520 10:50:34.904333   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.939632   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.939656   28545 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:50:34.939667   28545 kubeadm.go:928] updating node { 192.168.39.152 8443 v1.30.1 crio true true} ...
	I0520 10:50:34.939770   28545 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:50:34.939868   28545 ssh_runner.go:195] Run: crio config
	I0520 10:50:34.996367   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:50:34.996383   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:50:34.996396   28545 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:50:34.996422   28545 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-570850 NodeName:ha-570850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:50:34.996550   28545 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-570850"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 10:50:34.996568   28545 kube-vip.go:115] generating kube-vip config ...
	I0520 10:50:34.996616   28545 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:50:35.009156   28545 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:50:35.009260   28545 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0520 10:50:35.009315   28545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:50:35.019441   28545 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:50:35.019484   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 10:50:35.029250   28545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 10:50:35.045718   28545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:50:35.062017   28545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 10:50:35.078117   28545 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:50:35.094874   28545 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:50:35.099790   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:35.242816   28545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:50:35.258353   28545 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.152
	I0520 10:50:35.258374   28545 certs.go:194] generating shared ca certs ...
	I0520 10:50:35.258393   28545 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.258537   28545 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:50:35.258592   28545 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:50:35.258615   28545 certs.go:256] generating profile certs ...
	I0520 10:50:35.258705   28545 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:50:35.258743   28545 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f
	I0520 10:50:35.258789   28545 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.240 192.168.39.254]
	I0520 10:50:35.323349   28545 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f ...
	I0520 10:50:35.323379   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f: {Name:mk80a84971cac50baec0222bf4535f70c6381cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323558   28545 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f ...
	I0520 10:50:35.323576   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f: {Name:mk3eecf14b574a3dc74183c4476fb213e98e80f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323674   28545 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:50:35.323862   28545 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:50:35.324034   28545 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:50:35.324052   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:50:35.324068   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:50:35.324091   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:50:35.324110   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:50:35.324129   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:50:35.324149   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:50:35.324168   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:50:35.324186   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:50:35.324246   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:50:35.324285   28545 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:50:35.324301   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:50:35.324334   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:50:35.324365   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:50:35.324401   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:50:35.324453   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:35.324492   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.324512   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.324531   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.325124   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:50:35.350917   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:50:35.375112   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:50:35.398626   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:50:35.422688   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 10:50:35.447172   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:50:35.470738   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:50:35.553009   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:50:35.610723   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:50:35.648271   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:50:35.677258   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:50:35.710822   28545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:50:35.733493   28545 ssh_runner.go:195] Run: openssl version
	I0520 10:50:35.740566   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:50:35.753475   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759848   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759889   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.765875   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:50:35.782991   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:50:35.798421   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805287   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805341   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.819309   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:50:35.829389   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:50:35.840391   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844907   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844994   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.850890   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 10:50:35.860742   28545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:50:35.865350   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 10:50:35.871853   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 10:50:35.878065   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 10:50:35.883542   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 10:50:35.889358   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 10:50:35.895053   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 10:50:35.900431   28545 kubeadm.go:391] StartCluster: {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:50:35.900524   28545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:50:35.900574   28545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:50:35.939457   28545 cri.go:89] found id: "0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52"
	I0520 10:50:35.939478   28545 cri.go:89] found id: "1554ca782d2e3f7c42b06a3c0e74531de3d26d0ca5fe60fa686a7b6afa191326"
	I0520 10:50:35.939484   28545 cri.go:89] found id: "e07aed17013f0a9702039b0e913a8f3ee47ae3a2c5783ffb5c6d19619cbaf058"
	I0520 10:50:35.939489   28545 cri.go:89] found id: "53da5ef1aad0da9abe35a5f2db0970025b8855cb63c2361b1c617a487c1b0156"
	I0520 10:50:35.939493   28545 cri.go:89] found id: "a996ad2d62ca76ff1ec8d79e5907a3dccdc7fcf6178ff99795bd7bec168652ed"
	I0520 10:50:35.939497   28545 cri.go:89] found id: "5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee"
	I0520 10:50:35.939501   28545 cri.go:89] found id: "90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d"
	I0520 10:50:35.939504   28545 cri.go:89] found id: "db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c"
	I0520 10:50:35.939507   28545 cri.go:89] found id: "501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141"
	I0520 10:50:35.939513   28545 cri.go:89] found id: "2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0"
	I0520 10:50:35.939517   28545 cri.go:89] found id: "5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853"
	I0520 10:50:35.939520   28545 cri.go:89] found id: "3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d"
	I0520 10:50:35.939524   28545 cri.go:89] found id: "5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6"
	I0520 10:50:35.939528   28545 cri.go:89] found id: "43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68"
	I0520 10:50:35.939536   28545 cri.go:89] found id: "25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab"
	I0520 10:50:35.939542   28545 cri.go:89] found id: ""
	I0520 10:50:35.939589   28545 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.923134384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42893f20-05fb-4c1e-b2ad-861f7596b5a1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.923534801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42893f20-05fb-4c1e-b2ad-861f7596b5a1 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.974500936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=825bca0e-ce20-4ba7-b267-e9c4dd9ec43b name=/runtime.v1.RuntimeService/Version
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.974597131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=825bca0e-ce20-4ba7-b267-e9c4dd9ec43b name=/runtime.v1.RuntimeService/Version
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.976813417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=911118c2-bf51-4986-a7ef-f47c98ece5c8 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.977206986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202413977187557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=911118c2-bf51-4986-a7ef-f47c98ece5c8 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.978058352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87501057-6848-40b7-ab04-0827fc64ba99 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.978116276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87501057-6848-40b7-ab04-0827fc64ba99 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:33 ha-570850 crio[3788]: time="2024-05-20 10:53:33.979194641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87501057-6848-40b7-ab04-0827fc64ba99 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.007716525Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ff2bbcbc-7177-479f-ae69-2a67f7474e44 name=/runtime.v1.ImageService/ListImages
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.008111911Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c],Size_:117601759,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52 registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027],Size_:112170310,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036 registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973],Size_:63026504,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,RepoTags:[registry.k8s.io/kube-proxy:v1.30.1],RepoDigests:[registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c],Size_:85933465,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d8
67d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,RepoTags:[docker.io/kindest/kindnetd:v20240202-8f1494ea],RepoDigests:[docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac],Size_:65291810,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kub
e-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=ff2bbcbc-7177-479f-ae69-2a67f7474e44 name=/runtime.v1.ImageService/ListImages
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.049263284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=addd7b28-22f5-4562-8c3f-0f4e049a546d name=/runtime.v1.RuntimeService/Version
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.049378004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=addd7b28-22f5-4562-8c3f-0f4e049a546d name=/runtime.v1.RuntimeService/Version
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.050500788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94f61351-5c86-44dc-a5fd-b60a38f58dfb name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.051353093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202414051328688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94f61351-5c86-44dc-a5fd-b60a38f58dfb name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.051999504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14db7a1e-28c3-44f7-bce4-d4ccd19dee93 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.052075602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14db7a1e-28c3-44f7-bce4-d4ccd19dee93 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.052461112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14db7a1e-28c3-44f7-bce4-d4ccd19dee93 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.098407283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4208495-51c0-4876-9266-d67a24ca0af3 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.098501104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4208495-51c0-4876-9266-d67a24ca0af3 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.100208174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5128772-722b-4c79-9f64-4fd1747f4e8f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.100839795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202414100814658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5128772-722b-4c79-9f64-4fd1747f4e8f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.101522060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eb8b0fa-8c3d-461f-99f6-9daa97fbcbe2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.101594652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eb8b0fa-8c3d-461f-99f6-9daa97fbcbe2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:34 ha-570850 crio[3788]: time="2024-05-20 10:53:34.102069844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8eb8b0fa-8c3d-461f-99f6-9daa97fbcbe2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fbd089079a46f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   6a973dfea8b68       storage-provisioner
	44a408e291c1c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Running             kindnet-cni               3                   ccad4bbeb746b       kindnet-2zphj
	cbf4148192834       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Running             kube-controller-manager   2                   1d8c51dfd18bd       kube-controller-manager-ha-570850
	b1abea911944e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Running             kube-apiserver            3                   eb560a5a0f57a       kube-apiserver-ha-570850
	2f0638b0a11ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   6a973dfea8b68       storage-provisioner
	4a133506868eb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   7e79c547e58b0       busybox-fc5497c4f-q9r5v
	970cae7e58e20       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   e59330b11d714       kube-vip-ha-570850
	af1ea8e41b729       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   ccad4bbeb746b       kindnet-2zphj
	e1428c77c0501       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   f741cd63d2153       coredns-7db6d8ff4d-jw77g
	2345cda026db0       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   760a55f5a3898       kube-proxy-dnhm6
	8478dc10333b9       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   e837ff20e95ab       kube-scheduler-ha-570850
	cb5165edef082       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   880da8bf8800b       etcd-ha-570850
	0ccc0ff586316       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   eb560a5a0f57a       kube-apiserver-ha-570850
	77bb623d31463       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   1d8c51dfd18bd       kube-controller-manager-ha-570850
	0ce23a0de515f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   16a5e205296fb       coredns-7db6d8ff4d-xr965
	7ff013665ebf9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   1b009be6473f8       busybox-fc5497c4f-q9r5v
	90d27af32d37b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   7f39567bf239f       coredns-7db6d8ff4d-jw77g
	db09b659f2af2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   5862a2beb755c       coredns-7db6d8ff4d-xr965
	2ab644050dd47       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      14 minutes ago       Exited              kube-proxy                0                   17d9c116389c0       kube-proxy-dnhm6
	3ed532de1dd4e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   6e5a2b4300b4d       etcd-ha-570850
	5f81e882715f6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      14 minutes ago       Exited              kube-scheduler            0                   760d0e6343aaf       kube-scheduler-ha-570850
	
	
	==> coredns [0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52] <==
	[INFO] plugin/kubernetes: Trace[661069811]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:50:43.854) (total time: 10001ms):
	Trace[661069811]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:50:53.856)
	Trace[661069811]: [10.001577462s] [10.001577462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1098691614]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:50:48.010) (total time: 10001ms):
	Trace[1098691614]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:50:58.012)
	Trace[1098691614]: [10.001866316s] [10.001866316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45586->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45586->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d] <==
	[INFO] 10.244.1.2:48903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120367s
	[INFO] 10.244.1.2:57868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001671078s
	[INFO] 10.244.1.2:56995 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082609s
	[INFO] 10.244.2.2:58942 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706357s
	[INFO] 10.244.2.2:35126 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090874s
	[INFO] 10.244.2.2:37909 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008474s
	[INFO] 10.244.2.2:51462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166312s
	[INFO] 10.244.2.2:33520 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093626s
	[INFO] 10.244.0.4:59833 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088933s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093196s
	[INFO] 10.244.1.2:50071 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124886s
	[INFO] 10.244.1.2:42327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143875s
	[INFO] 10.244.0.4:35557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115628s
	[INFO] 10.244.0.4:36441 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148453s
	[INFO] 10.244.0.4:52951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110826s
	[INFO] 10.244.0.4:41170 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075418s
	[INFO] 10.244.1.2:54236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124156s
	[INFO] 10.244.1.2:55716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008044s
	[INFO] 10.244.1.2:51672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128329s
	[INFO] 10.244.2.2:44501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195618s
	[INFO] 10.244.2.2:48083 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105613s
	[INFO] 10.244.2.2:41793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131302s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1920&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c] <==
	[INFO] 10.244.2.2:57122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001355331s
	[INFO] 10.244.0.4:59866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156011s
	[INFO] 10.244.0.4:56885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009797641s
	[INFO] 10.244.0.4:37828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138434s
	[INFO] 10.244.1.2:59957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100136s
	[INFO] 10.244.1.2:41914 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180125s
	[INFO] 10.244.1.2:38547 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077887s
	[INFO] 10.244.1.2:35980 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172936s
	[INFO] 10.244.2.2:50204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019701s
	[INFO] 10.244.2.2:42765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076194s
	[INFO] 10.244.2.2:44033 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001282105s
	[INFO] 10.244.0.4:49359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121025s
	[INFO] 10.244.0.4:41717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077487s
	[INFO] 10.244.0.4:44192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062315s
	[INFO] 10.244.1.2:40081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176555s
	[INFO] 10.244.2.2:58201 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150016s
	[INFO] 10.244.2.2:56308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145342s
	[INFO] 10.244.2.2:36926 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093685s
	[INFO] 10.244.2.2:42968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009681s
	[INFO] 10.244.1.2:34643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:49012 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281845s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1951&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1951&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[350754239]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:50:44.690) (total time: 10001ms):
	Trace[350754239]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:50:54.691)
	Trace[350754239]: [10.001535123s] [10.001535123s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43826->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43826->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44156->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44156->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43832->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43832->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-570850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_38_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:38:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:39:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-570850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 328136f6dc1d4de39599ea377c28404e
	  System UUID:                328136f6-dc1d-4de3-9599-ea377c28404e
	  Boot ID:                    bc8aa620-b931-4636-9be0-cbe2c643d897
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace    Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                               ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-q9r5v            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  coredns-7db6d8ff4d-jw77g           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system  coredns-7db6d8ff4d-xr965           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system  etcd-ha-570850                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system  kindnet-2zphj                      100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system  kube-apiserver-ha-570850           250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-controller-manager-ha-570850  200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-proxy-dnhm6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-scheduler-ha-570850           100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system  kube-vip-ha-570850                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system  storage-provisioner                0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 14m    kube-proxy       
	  Normal   Starting                 2m10s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m    kubelet          Node ha-570850 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m    kubelet          Node ha-570850 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m    kubelet          Node ha-570850 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   NodeReady                14m    kubelet          Node ha-570850 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Warning  ContainerGCFailed        3m42s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m6s   node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   RegisteredNode           118s   node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   RegisteredNode           23s    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	
	
	Name:               ha-570850-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:40:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:53:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-570850-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 679a56af7395441fb77f57941486fb9b
	  System UUID:                679a56af-7395-441f-b77f-57941486fb9b
	  Boot ID:                    752295ac-6834-4f74-b2f2-eb38f1212afe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-wpv4q                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  etcd-ha-570850-m02                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system  kindnet-f5x4s                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system  kube-apiserver-ha-570850-m02           250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-controller-manager-ha-570850-m02  200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-proxy-6s5x7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-scheduler-ha-570850-m02           100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-vip-ha-570850-m02                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-570850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-570850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-570850-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  NodeNotReady             9m19s                  node-controller  Node ha-570850-m02 status is now: NodeNotReady
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node ha-570850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           118s                   node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           23s                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	
	
	Name:               ha-570850-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_42_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:42:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:53:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:53:12 +0000   Mon, 20 May 2024 10:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:53:12 +0000   Mon, 20 May 2024 10:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:53:12 +0000   Mon, 20 May 2024 10:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:53:12 +0000   Mon, 20 May 2024 10:52:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-570850-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dbb5ce07c2d4d77b22b999a94184ed3
	  System UUID:                1dbb5ce0-7c2d-4d77-b22b-999a94184ed3
	  Boot ID:                    5493d3b6-03d9-493d-882c-f7baf5e32228
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                   ------------  ----------  ---------------  -------------  ---
	  default      busybox-fc5497c4f-x2wtn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  etcd-ha-570850-m03                     100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system  kindnet-6tkd5                          100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system  kube-apiserver-ha-570850-m03           250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-controller-manager-ha-570850-m03  200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-proxy-n5std                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-scheduler-ha-570850-m03           100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system  kube-vip-ha-570850-m03                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-570850-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal   RegisteredNode           118s               node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	  Normal   NodeNotReady             86s                node-controller  Node ha-570850-m03 status is now: NodeNotReady
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  53s (x2 over 53s)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s (x2 over 53s)  kubelet          Node ha-570850-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s (x2 over 53s)  kubelet          Node ha-570850-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 53s                kubelet          Node ha-570850-m03 has been rebooted, boot id: 5493d3b6-03d9-493d-882c-f7baf5e32228
	  Normal   NodeReady                53s                kubelet          Node ha-570850-m03 status is now: NodeReady
	  Normal   RegisteredNode           23s                node-controller  Node ha-570850-m03 event: Registered Node ha-570850-m03 in Controller
	
	
	Name:               ha-570850-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_43_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:43:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:46:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-570850-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f2f8926106479fbb09b362a7c92c83
	  System UUID:                14f2f892-6106-479f-bb09-b362a7c92c83
	  Boot ID:                    2b747312-6091-4a44-bf3d-81557c80cae1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----              ------------  ----------  ---------------  -------------  ---
	  kube-system  kindnet-p9s2j     100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system  kube-proxy-xt8ht  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-570850-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           10m                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-570850-m04 status is now: NodeReady
	  Normal  RegisteredNode           2m6s               node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           118s               node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  NodeNotReady             86s                node-controller  Node ha-570850-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           23s                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.905313] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063920] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182694] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.123506] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.262222] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.187463] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.199064] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.059504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.313134] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[May20 10:39] kauditd_printk_skb: 21 callbacks suppressed
	[May20 10:41] kauditd_printk_skb: 72 callbacks suppressed
	[May20 10:50] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.137290] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.175518] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +0.150488] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.300141] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +1.258769] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +5.092805] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.503054] kauditd_printk_skb: 76 callbacks suppressed
	[  +7.055284] kauditd_printk_skb: 2 callbacks suppressed
	[May20 10:51] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.452931] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d] <==
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T10:49:01.787883Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.152:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T10:49:01.787942Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.152:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T10:49:01.788008Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"900c4b71f7b778f3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T10:49:01.788218Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788361Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788449Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788523Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788598Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78872Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78875Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.788786Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.78884Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.788972Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789046Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789147Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.792404Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-05-20T10:49:01.792487Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-05-20T10:49:01.792512Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-570850","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.152:2380"],"advertise-client-urls":["https://192.168.39.152:2379"]}
	
	
	==> etcd [cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4] <==
	{"level":"warn","ts":"2024-05-20T10:52:36.538096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:52:36.573832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:52:36.575743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:52:36.637292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"900c4b71f7b778f3","from":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-20T10:52:40.135358Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.240:2380/version","remote-member-id":"8a9e64255fae650f","error":"Get \"https://192.168.39.240:2380/version\": dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:40.13549Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8a9e64255fae650f","error":"Get \"https://192.168.39.240:2380/version\": dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:41.484451Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:41.484617Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:44.137869Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.240:2380/version","remote-member-id":"8a9e64255fae650f","error":"Get \"https://192.168.39.240:2380/version\": dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:44.138048Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8a9e64255fae650f","error":"Get \"https://192.168.39.240:2380/version\": dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:46.485859Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:46.485899Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:48.13975Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.240:2380/version","remote-member-id":"8a9e64255fae650f","error":"Get \"https://192.168.39.240:2380/version\": dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:48.139911Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8a9e64255fae650f","error":"Get \"https://192.168.39.240:2380/version\": dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:50.65859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.978992ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8715467576689114287 > lease_revoke:<id:78f38f959ffd3c30>","response":"size:29"}
	{"level":"info","ts":"2024-05-20T10:52:51.366812Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:52:51.376587Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:52:51.376882Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:52:51.382892Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"900c4b71f7b778f3","to":"8a9e64255fae650f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T10:52:51.38308Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:52:51.414965Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"900c4b71f7b778f3","to":"8a9e64255fae650f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T10:52:51.415106Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:52:51.485999Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:51.486322Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-20T10:52:55.661887Z","caller":"traceutil/trace.go:171","msg":"trace[1891030076] transaction","detail":"{read_only:false; response_revision:2515; number_of_response:1; }","duration":"116.814274ms","start":"2024-05-20T10:52:55.545034Z","end":"2024-05-20T10:52:55.661848Z","steps":["trace[1891030076] 'process raft request'  (duration: 116.46909ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:53:34 up 15 min,  0 users,  load average: 0.34, 0.55, 0.35
	Linux ha-570850 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747] <==
	I0520 10:53:00.890975       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:53:10.899850       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:53:10.900060       1 main.go:227] handling current node
	I0520 10:53:10.900103       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:53:10.900123       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:53:10.900311       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:53:10.900333       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:53:10.900400       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:53:10.900418       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:53:20.916108       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:53:20.916402       1 main.go:227] handling current node
	I0520 10:53:20.916450       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:53:20.916472       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:53:20.916625       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:53:20.916704       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:53:20.916786       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:53:20.916806       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:53:30.961237       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:53:30.961278       1 main.go:227] handling current node
	I0520 10:53:30.961289       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:53:30.961295       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:53:30.961436       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:53:30.961467       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:53:30.961580       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:53:30.961613       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e] <==
	I0520 10:50:41.426803       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 10:50:43.021158       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:46.093379       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:49.165158       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:52.237070       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:55.309122       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6] <==
	I0520 10:50:41.382599       1 options.go:221] external host was not specified, using 192.168.39.152
	I0520 10:50:41.384951       1 server.go:148] Version: v1.30.1
	I0520 10:50:41.385034       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:50:42.086743       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 10:50:42.087720       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 10:50:42.087801       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 10:50:42.087964       1 instance.go:299] Using reconciler: lease
	I0520 10:50:42.088406       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0520 10:51:02.081471       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0520 10:51:02.082836       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0520 10:51:02.089260       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9] <==
	I0520 10:51:23.746988       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0520 10:51:23.747005       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 10:51:23.747017       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 10:51:23.807071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 10:51:23.807228       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 10:51:23.807314       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 10:51:23.807320       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 10:51:23.807429       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 10:51:23.808318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 10:51:23.809916       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 10:51:23.809998       1 aggregator.go:165] initial CRD sync complete...
	I0520 10:51:23.810045       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 10:51:23.810068       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 10:51:23.810075       1 cache.go:39] Caches are synced for autoregister controller
	I0520 10:51:23.814335       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 10:51:23.832811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 10:51:23.834572       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 10:51:23.834621       1 policy_source.go:224] refreshing policies
	W0520 10:51:23.839203       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.240]
	I0520 10:51:23.841785       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 10:51:23.862023       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0520 10:51:23.869480       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0520 10:51:23.877096       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 10:51:24.714740       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0520 10:51:25.094443       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.152]
	
	
	==> kube-controller-manager [77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc] <==
	I0520 10:50:41.928307       1 serving.go:380] Generated self-signed cert in-memory
	I0520 10:50:42.230214       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 10:50:42.230314       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:50:42.231880       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 10:50:42.232770       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 10:50:42.232926       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 10:50:42.233019       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0520 10:51:03.094351       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.152:8443/healthz\": dial tcp 192.168.39.152:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9] <==
	I0520 10:51:36.069954       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0520 10:51:36.188332       1 shared_informer.go:320] Caches are synced for disruption
	I0520 10:51:36.191160       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 10:51:36.223620       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:51:36.232924       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:51:36.658601       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 10:51:36.658681       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 10:51:36.661302       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 10:51:43.734248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="598.132µs"
	I0520 10:51:46.026770       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7rrwn\": the object has been modified; please apply your changes to the latest version and try again"
	I0520 10:51:46.027026       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"1274fdc6-2bfb-4a11-be9a-6c47b12678e1", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7rrwn": the object has been modified; please apply your changes to the latest version and try again
	I0520 10:51:46.046520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.773336ms"
	I0520 10:51:46.046887       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="195.807µs"
	I0520 10:51:49.057445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.097677ms"
	I0520 10:51:49.058101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.619µs"
	I0520 10:52:07.953022       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7rrwn\": the object has been modified; please apply your changes to the latest version and try again"
	I0520 10:52:07.956240       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"1274fdc6-2bfb-4a11-be9a-6c47b12678e1", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7rrwn": the object has been modified; please apply your changes to the latest version and try again
	I0520 10:52:07.978032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.7686ms"
	I0520 10:52:07.978246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="96.656µs"
	I0520 10:52:08.858380       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-570850-m04"
	I0520 10:52:09.058775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.886522ms"
	I0520 10:52:09.059624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.642µs"
	I0520 10:52:42.696585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.709µs"
	I0520 10:53:02.043031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.675188ms"
	I0520 10:53:02.043311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.865µs"
	
	
	==> kube-proxy [2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a] <==
	I0520 10:50:42.252548       1 server_linux.go:69] "Using iptables proxy"
	E0520 10:50:42.702920       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:50:45.773173       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:50:48.845302       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:50:54.990510       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:51:07.278218       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 10:51:23.652078       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0520 10:51:23.792928       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:51:23.793026       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:51:23.793049       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:51:23.795600       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:51:23.796001       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:51:23.796042       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:51:23.797889       1 config.go:192] "Starting service config controller"
	I0520 10:51:23.797947       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:51:23.797983       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:51:23.798003       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:51:23.798826       1 config.go:319] "Starting node config controller"
	I0520 10:51:23.798873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:51:23.898722       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:51:23.898763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:51:23.899076       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0] <==
	E0520 10:47:56.817798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:47:59.886237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:47:59.886296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:47:59.886743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:47:59.886788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:02.958465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:02.958522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:06.030750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:06.030910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:06.031048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:06.031088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:09.102907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:09.103139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.318779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.318956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.319040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:33.678915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:33.678996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:39.822265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:39.822374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:42.893857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:42.894425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6] <==
	W0520 10:48:57.725415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:48:57.725683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:48:57.740415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:48:57.740553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:48:57.787073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:48:57.787239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:48:57.832834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:48:57.832960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:48:58.121359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:48:58.121848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:48:58.122808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:48:58.122897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:48:58.211914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:48:58.212150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:48:58.356603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:48:58.356831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:48:58.487408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:48:58.487464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:49:01.242093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.242122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:49:01.323331       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:49:01.323366       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:49:01.451021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.451066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.692998       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2] <==
	W0520 10:51:18.574460       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:18.574601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.115455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.115536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.578422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.578548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.609240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.152:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.609313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.152:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.627241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.152:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.627303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.152:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:20.354183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.152:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:20.354245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.152:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:20.959906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.152:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:20.959963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.152:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:21.539782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.152:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:21.539834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.152:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:23.747175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:51:23.747232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:51:23.747319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:51:23.747353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:51:23.747400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:51:23.747438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:51:23.755534       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:51:23.755585       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 10:51:44.909303       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:51:22 ha-570850 kubelet[1371]: E0520 10:51:22.638255    1371 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1929": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:51:22 ha-570850 kubelet[1371]: I0520 10:51:22.638364    1371 status_manager.go:853] "Failed to get status for pod" podUID="43a157a6ef6f6adedca3ec1b2efae360" pod="kube-system/kube-vip-ha-570850" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 10:51:22 ha-570850 kubelet[1371]: E0520 10:51:22.639043    1371 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-570850.17d12ca641408e43\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-570850.17d12ca641408e43  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-570850,UID:701ae10dab129a58e87d4c5965df6fc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-570850,},FirstTimestamp:2024-05-20 10:47:05.575812675 +0000 UTC m=+492.887032054,LastTimestamp:2024-05-20 10:47:09.582314636 +0000 UTC m=+496.893534016,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-570850,}"
	May 20 10:51:22 ha-570850 kubelet[1371]: I0520 10:51:22.989997    1371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-q9r5v" podStartSLOduration=533.850641132 podStartE2EDuration="8m55.989952505s" podCreationTimestamp="2024-05-20 10:42:27 +0000 UTC" firstStartedPulling="2024-05-20 10:42:29.655152044 +0000 UTC m=+216.966371422" lastFinishedPulling="2024-05-20 10:42:31.794463418 +0000 UTC m=+219.105682795" observedRunningTime="2024-05-20 10:42:32.770012536 +0000 UTC m=+220.081231934" watchObservedRunningTime="2024-05-20 10:51:22.989952505 +0000 UTC m=+750.301171903"
	May 20 10:51:23 ha-570850 kubelet[1371]: I0520 10:51:23.813840    1371 scope.go:117] "RemoveContainer" containerID="77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc"
	May 20 10:51:29 ha-570850 kubelet[1371]: I0520 10:51:29.814080    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:51:29 ha-570850 kubelet[1371]: E0520 10:51:29.814264    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:51:29 ha-570850 kubelet[1371]: I0520 10:51:29.814497    1371 scope.go:117] "RemoveContainer" containerID="af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e"
	May 20 10:51:43 ha-570850 kubelet[1371]: I0520 10:51:43.814117    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:51:43 ha-570850 kubelet[1371]: E0520 10:51:43.814432    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:51:52 ha-570850 kubelet[1371]: E0520 10:51:52.854536    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:51:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:51:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:51:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:51:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:51:54 ha-570850 kubelet[1371]: I0520 10:51:54.813765    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:51:54 ha-570850 kubelet[1371]: E0520 10:51:54.815619    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:52:09 ha-570850 kubelet[1371]: I0520 10:52:09.813279    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:52:20 ha-570850 kubelet[1371]: I0520 10:52:20.814119    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-570850" podUID="f230fe6d-0d6e-4a3b-bbf8-694d256f1fde"
	May 20 10:52:20 ha-570850 kubelet[1371]: I0520 10:52:20.833479    1371 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-570850"
	May 20 10:52:52 ha-570850 kubelet[1371]: E0520 10:52:52.856827    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:52:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:52:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:52:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:52:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 10:53:33.601318   29919 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18925-3819/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-570850 -n ha-570850
helpers_test.go:261: (dbg) Run:  kubectl --context ha-570850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (396.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (19.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-570850 node delete m03 -v=7 --alsologtostderr: (16.505241968s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 2 (586.446453ms)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:53:52.283053   30197 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:53:52.283287   30197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:53:52.283296   30197 out.go:304] Setting ErrFile to fd 2...
	I0520 10:53:52.283300   30197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:53:52.283479   30197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:53:52.283678   30197 out.go:298] Setting JSON to false
	I0520 10:53:52.283705   30197 mustload.go:65] Loading cluster: ha-570850
	I0520 10:53:52.283806   30197 notify.go:220] Checking for updates...
	I0520 10:53:52.284125   30197 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:53:52.284142   30197 status.go:255] checking status of ha-570850 ...
	I0520 10:53:52.284483   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.284534   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.305362   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I0520 10:53:52.305784   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.306332   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.306354   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.306660   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.306849   30197 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:53:52.308686   30197 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:53:52.308705   30197 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:53:52.309014   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.309047   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.323803   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I0520 10:53:52.324195   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.324601   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.324623   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.324916   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.325128   30197 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:53:52.327832   30197 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:53:52.328424   30197 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:53:52.328460   30197 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:53:52.328562   30197 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:53:52.328874   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.328948   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.345115   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I0520 10:53:52.345506   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.345991   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.346014   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.346386   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.346582   30197 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:53:52.346782   30197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:53:52.346811   30197 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:53:52.349667   30197 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:53:52.350201   30197 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:53:52.350232   30197 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:53:52.350327   30197 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:53:52.350549   30197 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:53:52.350719   30197 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:53:52.350862   30197 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:53:52.432680   30197 ssh_runner.go:195] Run: systemctl --version
	I0520 10:53:52.439131   30197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:53:52.454250   30197 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:53:52.454277   30197 api_server.go:166] Checking apiserver status ...
	I0520 10:53:52.454307   30197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:53:52.469112   30197 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5088/cgroup
	W0520 10:53:52.479194   30197 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5088/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:53:52.479247   30197 ssh_runner.go:195] Run: ls
	I0520 10:53:52.483881   30197 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:53:52.488009   30197 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:53:52.488027   30197 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:53:52.488040   30197 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:53:52.488069   30197 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:53:52.488457   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.488499   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.504103   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
	I0520 10:53:52.504542   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.505120   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.505148   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.505427   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.505616   30197 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:53:52.507191   30197 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:53:52.507203   30197 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:53:52.507471   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.507501   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.522401   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0520 10:53:52.522761   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.523232   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.523251   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.523571   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.523776   30197 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:53:52.526305   30197 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:53:52.526700   30197 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:50:46 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:53:52.526727   30197 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:53:52.526851   30197 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:53:52.527133   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.527165   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.541468   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0520 10:53:52.541861   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.542332   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.542353   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.542633   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.542804   30197 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:53:52.543000   30197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:53:52.543022   30197 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:53:52.545820   30197 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:53:52.546237   30197 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:50:46 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:53:52.546256   30197 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:53:52.546391   30197 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:53:52.546552   30197 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:53:52.546681   30197 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:53:52.546795   30197 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:53:52.628289   30197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:53:52.646400   30197 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:53:52.646424   30197 api_server.go:166] Checking apiserver status ...
	I0520 10:53:52.646453   30197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:53:52.662789   30197 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1588/cgroup
	W0520 10:53:52.673131   30197 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1588/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:53:52.673175   30197 ssh_runner.go:195] Run: ls
	I0520 10:53:52.677731   30197 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:53:52.681927   30197 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0520 10:53:52.681947   30197 status.go:422] ha-570850-m02 apiserver status = Running (err=<nil>)
	I0520 10:53:52.681954   30197 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:53:52.681967   30197 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:53:52.682237   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.682274   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.697535   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0520 10:53:52.697928   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.698412   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.698437   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.698745   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.698944   30197 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:53:52.700423   30197 status.go:330] ha-570850-m04 host status = "Running" (err=<nil>)
	I0520 10:53:52.700441   30197 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:53:52.700714   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.700750   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.716019   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0520 10:53:52.716497   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.717071   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.717092   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.717465   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.717658   30197 main.go:141] libmachine: (ha-570850-m04) Calling .GetIP
	I0520 10:53:52.720234   30197 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:53:52.720667   30197 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:53:25 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:53:52.720702   30197 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:53:52.720835   30197 host.go:66] Checking if "ha-570850-m04" exists ...
	I0520 10:53:52.721228   30197 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:52.721278   30197 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:52.735464   30197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0520 10:53:52.735812   30197 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:52.736265   30197 main.go:141] libmachine: Using API Version  1
	I0520 10:53:52.736294   30197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:52.736590   30197 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:52.736758   30197 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:53:52.736923   30197 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:53:52.736961   30197 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:53:52.739425   30197 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:53:52.739803   30197 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:53:25 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:53:52.739822   30197 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:53:52.739955   30197 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:53:52.740113   30197 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:53:52.740246   30197 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:53:52.740340   30197 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:53:52.816308   30197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:53:52.830094   30197 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr" : exit status 2
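Note: in the status output above, ha-570850-m04 reports Kubelet:Stopped while both remaining control-plane nodes report Running, which is consistent with the non-zero exit from the status command. A minimal follow-up sketch (profile and node names are taken from the log above; the ssh form mirrors the Audit table below, while the status --node flag is an assumption based on standard minikube usage and was not executed in this run):
	# inspect the kubelet on the worker that reports Kubelet:Stopped
	out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo systemctl status kubelet --no-pager"
	# re-check only that node's status
	out/minikube-linux-amd64 -p ha-570850 status --node ha-570850-m04 --alsologtostderr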
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-570850 -n ha-570850
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-570850 logs -n 25: (1.673084898s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m04 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp testdata/cp-test.txt                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m04_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03:/home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m03 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-570850 node stop m02 -v=7                                                     | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-570850 node start m02 -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-570850 -v=7                                                           | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-570850 -v=7                                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-570850 --wait=true -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-570850                                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:53 UTC |                     |
	| node    | ha-570850 node delete m03 -v=7                                                   | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:53 UTC | 20 May 24 10:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
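	The Audit table above records the command sequence that preceded this failure. A condensed reproduction sketch (same profile and flags as logged; ordering follows the table, with timings and the intermediate "node list" calls omitted — not a verbatim replay of the harness):
	out/minikube-linux-amd64 -p ha-570850 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-570850 node start m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 stop -p ha-570850 -v=7 --alsologtostderr
	out/minikube-linux-amd64 start -p ha-570850 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-570850 node delete m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr   # exits 2 in this run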
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:49:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:49:00.810311   28545 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:49:00.810557   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810568   28545 out.go:304] Setting ErrFile to fd 2...
	I0520 10:49:00.810574   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810762   28545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:49:00.811306   28545 out.go:298] Setting JSON to false
	I0520 10:49:00.812227   28545 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1884,"bootTime":1716200257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:49:00.812278   28545 start.go:139] virtualization: kvm guest
	I0520 10:49:00.814785   28545 out.go:177] * [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:49:00.816402   28545 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:49:00.816404   28545 notify.go:220] Checking for updates...
	I0520 10:49:00.817786   28545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:49:00.819143   28545 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:49:00.820484   28545 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:49:00.821703   28545 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:49:00.822978   28545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:49:00.824648   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:00.824745   28545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:49:00.825258   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.825308   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.841522   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0520 10:49:00.841935   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.842526   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.842555   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.842946   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.843128   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.877361   28545 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 10:49:00.878748   28545 start.go:297] selected driver: kvm2
	I0520 10:49:00.878771   28545 start.go:901] validating driver "kvm2" against &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.878981   28545 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:49:00.879321   28545 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.879400   28545 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:49:00.893725   28545 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:49:00.894355   28545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:49:00.894384   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:49:00.894392   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:49:00.894441   28545 start.go:340] cluster config:
	{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.894551   28545 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.896612   28545 out.go:177] * Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	I0520 10:49:00.897801   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:49:00.897833   28545 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:49:00.897846   28545 cache.go:56] Caching tarball of preloaded images
	I0520 10:49:00.897966   28545 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:49:00.897980   28545 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:49:00.898144   28545 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:49:00.898400   28545 start.go:360] acquireMachinesLock for ha-570850: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:49:00.898447   28545 start.go:364] duration metric: took 28.765µs to acquireMachinesLock for "ha-570850"
	I0520 10:49:00.898467   28545 start.go:96] Skipping create...Using existing machine configuration
	I0520 10:49:00.898476   28545 fix.go:54] fixHost starting: 
	I0520 10:49:00.898758   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.898808   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.912154   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0520 10:49:00.912545   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.913059   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.913080   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.913373   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.913661   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.913859   28545 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:49:00.915287   28545 fix.go:112] recreateIfNeeded on ha-570850: state=Running err=<nil>
	W0520 10:49:00.915306   28545 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 10:49:00.917201   28545 out.go:177] * Updating the running kvm2 "ha-570850" VM ...
	I0520 10:49:00.918411   28545 machine.go:94] provisionDockerMachine start ...
	I0520 10:49:00.918427   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.918614   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:00.921039   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921498   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:00.921531   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921695   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:00.921876   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922030   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922197   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:00.922398   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:00.922593   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:00.922606   28545 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:49:01.030186   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.030212   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030430   28545 buildroot.go:166] provisioning hostname "ha-570850"
	I0520 10:49:01.030458   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030664   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.033274   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033744   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.033768   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033918   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.034109   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034287   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034442   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.034599   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.034836   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.034853   28545 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850 && echo "ha-570850" | sudo tee /etc/hostname
	I0520 10:49:01.156409   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.156435   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.159025   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159417   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.159442   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159603   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.159765   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159884   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159977   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.160116   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.160263   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.160277   28545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:49:01.267106   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:49:01.267139   28545 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:49:01.267167   28545 buildroot.go:174] setting up certificates
	I0520 10:49:01.267174   28545 provision.go:84] configureAuth start
	I0520 10:49:01.267182   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.267438   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:49:01.270043   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270426   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.270462   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270636   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.272701   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273087   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.273114   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273209   28545 provision.go:143] copyHostCerts
	I0520 10:49:01.273239   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273270   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:49:01.273284   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273361   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:49:01.273463   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273485   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:49:01.273492   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273528   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:49:01.273608   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273631   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:49:01.273640   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273674   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:49:01.273756   28545 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850 san=[127.0.0.1 192.168.39.152 ha-570850 localhost minikube]
	I0520 10:49:01.408798   28545 provision.go:177] copyRemoteCerts
	I0520 10:49:01.408849   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:49:01.408885   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.411703   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412078   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.412113   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412257   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.412414   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.412576   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.412668   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:49:01.496371   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:49:01.496434   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:49:01.524309   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:49:01.524367   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:49:01.550409   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:49:01.550477   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0520 10:49:01.574937   28545 provision.go:87] duration metric: took 307.75201ms to configureAuth
	I0520 10:49:01.574962   28545 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:49:01.575171   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:01.575233   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.577826   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578239   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.578268   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578468   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.578697   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.578855   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.579015   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.579170   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.579333   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.579356   28545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:50:32.380185   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:50:32.380214   28545 machine.go:97] duration metric: took 1m31.461789949s to provisionDockerMachine
	I0520 10:50:32.380235   28545 start.go:293] postStartSetup for "ha-570850" (driver="kvm2")
	I0520 10:50:32.380249   28545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:50:32.380272   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.380603   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:50:32.380647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.383513   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.383918   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.383943   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.384065   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.384230   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.384381   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.384523   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.468651   28545 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:50:32.473075   28545 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:50:32.473099   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:50:32.473170   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:50:32.473245   28545 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:50:32.473254   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:50:32.473344   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:50:32.483198   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:32.507373   28545 start.go:296] duration metric: took 127.126357ms for postStartSetup
	I0520 10:50:32.507413   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.507673   28545 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 10:50:32.507693   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.510108   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510473   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.510497   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510642   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.510839   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.510983   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.511169   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.595598   28545 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 10:50:32.595620   28545 fix.go:56] duration metric: took 1m31.697144912s for fixHost
	I0520 10:50:32.595647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.598084   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598387   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.598424   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598549   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.598816   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599002   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599162   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.599307   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:50:32.599465   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:50:32.599475   28545 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:50:32.706542   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202232.681128286
	
	I0520 10:50:32.706566   28545 fix.go:216] guest clock: 1716202232.681128286
	I0520 10:50:32.706575   28545 fix.go:229] Guest: 2024-05-20 10:50:32.681128286 +0000 UTC Remote: 2024-05-20 10:50:32.595625893 +0000 UTC m=+91.817262454 (delta=85.502393ms)
	I0520 10:50:32.706607   28545 fix.go:200] guest clock delta is within tolerance: 85.502393ms
	I0520 10:50:32.706616   28545 start.go:83] releasing machines lock for "ha-570850", held for 1m31.808156278s
	I0520 10:50:32.706633   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.706850   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:32.709202   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709544   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.709569   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709734   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710174   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710331   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710424   28545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:50:32.710473   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.710479   28545 ssh_runner.go:195] Run: cat /version.json
	I0520 10:50:32.710495   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.712748   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713063   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713193   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713216   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713322   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713480   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713491   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713507   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713666   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.713676   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713846   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713854   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.713963   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.714072   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.810770   28545 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:50:32.810863   28545 ssh_runner.go:195] Run: systemctl --version
	I0520 10:50:32.817315   28545 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:50:32.989007   28545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:50:32.995244   28545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:50:32.995308   28545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:50:33.005340   28545 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 10:50:33.005361   28545 start.go:494] detecting cgroup driver to use...
	I0520 10:50:33.005418   28545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:50:33.023319   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:50:33.037817   28545 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:50:33.037877   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:50:33.052368   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:50:33.066569   28545 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:50:33.215546   28545 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:50:33.358676   28545 docker.go:233] disabling docker service ...
	I0520 10:50:33.358732   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:50:33.375689   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:50:33.389459   28545 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:50:33.538495   28545 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:50:33.698476   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:50:33.715452   28545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:50:33.734625   28545 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:50:33.734681   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.745464   28545 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:50:33.745532   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.756460   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.767137   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.777739   28545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:50:33.788287   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.798513   28545 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.809204   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.819733   28545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:50:33.829957   28545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:50:33.839650   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:33.982271   28545 ssh_runner.go:195] Run: sudo systemctl restart crio
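	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place and then restarts CRI-O. A minimal sketch of how the result could be checked on the node, assuming the same drop-in path; the expected values in the comments are taken from the sed commands above, not read back from the test host:

	    # Sketch (assumed environment): confirm the CRI-O drop-in the sed edits above aim to produce.
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' "$CONF"
	    # Expected, per the commands logged above:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])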
	I0520 10:50:34.731686   28545 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:50:34.731752   28545 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:50:34.736749   28545 start.go:562] Will wait 60s for crictl version
	I0520 10:50:34.736814   28545 ssh_runner.go:195] Run: which crictl
	I0520 10:50:34.740764   28545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:50:34.788346   28545 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:50:34.788434   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.817827   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.851583   28545 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:50:34.852979   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:34.855568   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.855886   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:34.855910   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.856082   28545 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:50:34.860885   28545 kubeadm.go:877] updating cluster {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:50:34.861058   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:50:34.861115   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.904271   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.904291   28545 crio.go:433] Images already preloaded, skipping extraction
	I0520 10:50:34.904333   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.939632   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.939656   28545 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:50:34.939667   28545 kubeadm.go:928] updating node { 192.168.39.152 8443 v1.30.1 crio true true} ...
	I0520 10:50:34.939770   28545 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:50:34.939868   28545 ssh_runner.go:195] Run: crio config
	I0520 10:50:34.996367   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:50:34.996383   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:50:34.996396   28545 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:50:34.996422   28545 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-570850 NodeName:ha-570850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:50:34.996550   28545 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-570850"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 10:50:34.996568   28545 kube-vip.go:115] generating kube-vip config ...
	I0520 10:50:34.996616   28545 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:50:35.009156   28545 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:50:35.009260   28545 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
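	The generated static pod above runs kube-vip with cp_enable and lb_enable set, so the lease holder of plndr-cp-lock should bind the HA VIP 192.168.39.254 on eth0 and forward port 8443 to the local API server. A hedged sketch of how that could be verified on a control-plane node; the address and port come from the config above, while the commands themselves are generic and not part of this log:

	    # Sketch (assumed environment): check that the kube-vip VIP from the manifest above is live.
	    ip -4 addr show dev eth0 | grep -F 192.168.39.254       # the VIP is only present on the current lease holder
	    curl -sk https://192.168.39.254:8443/version             # /version is normally readable anonymously; adjust if RBAC differs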
	I0520 10:50:35.009315   28545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:50:35.019441   28545 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:50:35.019484   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 10:50:35.029250   28545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 10:50:35.045718   28545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:50:35.062017   28545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 10:50:35.078117   28545 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:50:35.094874   28545 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:50:35.099790   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:35.242816   28545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:50:35.258353   28545 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.152
	I0520 10:50:35.258374   28545 certs.go:194] generating shared ca certs ...
	I0520 10:50:35.258393   28545 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.258537   28545 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:50:35.258592   28545 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:50:35.258615   28545 certs.go:256] generating profile certs ...
	I0520 10:50:35.258705   28545 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:50:35.258743   28545 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f
	I0520 10:50:35.258789   28545 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.240 192.168.39.254]
	I0520 10:50:35.323349   28545 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f ...
	I0520 10:50:35.323379   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f: {Name:mk80a84971cac50baec0222bf4535f70c6381cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323558   28545 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f ...
	I0520 10:50:35.323576   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f: {Name:mk3eecf14b574a3dc74183c4476fb213e98e80f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323674   28545 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:50:35.323862   28545 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:50:35.324034   28545 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:50:35.324052   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:50:35.324068   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:50:35.324091   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:50:35.324110   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:50:35.324129   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:50:35.324149   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:50:35.324168   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:50:35.324186   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:50:35.324246   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:50:35.324285   28545 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:50:35.324301   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:50:35.324334   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:50:35.324365   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:50:35.324401   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:50:35.324453   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:35.324492   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.324512   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.324531   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.325124   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:50:35.350917   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:50:35.375112   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:50:35.398626   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:50:35.422688   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 10:50:35.447172   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:50:35.470738   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:50:35.553009   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:50:35.610723   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:50:35.648271   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:50:35.677258   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:50:35.710822   28545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:50:35.733493   28545 ssh_runner.go:195] Run: openssl version
	I0520 10:50:35.740566   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:50:35.753475   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759848   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759889   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.765875   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:50:35.782991   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:50:35.798421   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805287   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805341   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.819309   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:50:35.829389   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:50:35.840391   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844907   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844994   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.850890   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
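	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes of the corresponding PEM files, which is how the system trust store looks certificates up. A small sketch, assuming the same paths as in the log, that recomputes those hashes:

	    # Sketch: recompute the subject hashes behind the /etc/ssl/certs/<hash>.0 symlinks created above.
	    for cert in minikubeCA.pem 11162.pem 111622.pem; do
	      printf '%s -> %s.0\n' "$cert" "$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$cert)"
	    done
	    # Per the symlinks above, this should print b5213941.0, 51391683.0 and 3ec20f2e.0 respectively.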
	I0520 10:50:35.860742   28545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:50:35.865350   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 10:50:35.871853   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 10:50:35.878065   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 10:50:35.883542   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 10:50:35.889358   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 10:50:35.895053   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
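	Each of the -checkend 86400 runs above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now, presumably so certificates close to expiry can be regenerated before the cluster is started. A minimal sketch of the same check in isolation, reusing the apiserver-kubelet-client path from the log:

	    # Sketch: -checkend 86400 exits 0 only if the certificate is still valid 24h from now.
	    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	      echo "certificate valid for at least another 24h"
	    else
	      echo "certificate expires within 24h (or is already expired)"
	    fi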
	I0520 10:50:35.900431   28545 kubeadm.go:391] StartCluster: {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:50:35.900524   28545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:50:35.900574   28545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:50:35.939457   28545 cri.go:89] found id: "0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52"
	I0520 10:50:35.939478   28545 cri.go:89] found id: "1554ca782d2e3f7c42b06a3c0e74531de3d26d0ca5fe60fa686a7b6afa191326"
	I0520 10:50:35.939484   28545 cri.go:89] found id: "e07aed17013f0a9702039b0e913a8f3ee47ae3a2c5783ffb5c6d19619cbaf058"
	I0520 10:50:35.939489   28545 cri.go:89] found id: "53da5ef1aad0da9abe35a5f2db0970025b8855cb63c2361b1c617a487c1b0156"
	I0520 10:50:35.939493   28545 cri.go:89] found id: "a996ad2d62ca76ff1ec8d79e5907a3dccdc7fcf6178ff99795bd7bec168652ed"
	I0520 10:50:35.939497   28545 cri.go:89] found id: "5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee"
	I0520 10:50:35.939501   28545 cri.go:89] found id: "90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d"
	I0520 10:50:35.939504   28545 cri.go:89] found id: "db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c"
	I0520 10:50:35.939507   28545 cri.go:89] found id: "501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141"
	I0520 10:50:35.939513   28545 cri.go:89] found id: "2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0"
	I0520 10:50:35.939517   28545 cri.go:89] found id: "5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853"
	I0520 10:50:35.939520   28545 cri.go:89] found id: "3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d"
	I0520 10:50:35.939524   28545 cri.go:89] found id: "5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6"
	I0520 10:50:35.939528   28545 cri.go:89] found id: "43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68"
	I0520 10:50:35.939536   28545 cri.go:89] found id: "25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab"
	I0520 10:50:35.939542   28545 cri.go:89] found id: ""
	I0520 10:50:35.939589   28545 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.491530606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202433491508154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efa99d51-3047-474d-84e5-3dd3f9f14e0e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.492292363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35da285c-3f0a-47be-8d69-650cb6fb9c94 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.492345625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35da285c-3f0a-47be-8d69-650cb6fb9c94 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.492814647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35da285c-3f0a-47be-8d69-650cb6fb9c94 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.539299233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3dd35111-9c73-4d57-a254-ea13a491ed34 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.539372885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3dd35111-9c73-4d57-a254-ea13a491ed34 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.541293580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee206e12-f463-407e-aabe-4c0ba1c374b9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.541804103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202433541772177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee206e12-f463-407e-aabe-4c0ba1c374b9 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.545346228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d4f61ec-055b-4614-b777-e91b38bb2f38 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.545430574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d4f61ec-055b-4614-b777-e91b38bb2f38 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.546008394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d4f61ec-055b-4614-b777-e91b38bb2f38 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.607720866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf70c1b0-b832-4d4f-a72b-44ad09690630 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.607802349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf70c1b0-b832-4d4f-a72b-44ad09690630 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.608942546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=671a0c41-ed92-40dc-a87e-0b6f94b36f63 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.609447685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202433609420893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=671a0c41-ed92-40dc-a87e-0b6f94b36f63 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.610044634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09f82fc8-928f-4c3d-b6cc-e90f3e78643c name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.610126420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09f82fc8-928f-4c3d-b6cc-e90f3e78643c name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.610553675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09f82fc8-928f-4c3d-b6cc-e90f3e78643c name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.655476557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e21c6a98-352e-485f-a2ba-7bc623a498b3 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.655586420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e21c6a98-352e-485f-a2ba-7bc623a498b3 name=/runtime.v1.RuntimeService/Version
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.657042163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46829ab6-af74-4783-b432-bb25a3a88f2e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.657459247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202433657436960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46829ab6-af74-4783-b432-bb25a3a88f2e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.657975300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe4ad064-8cfe-4f17-a42a-4b60af6e9e6e name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.658030248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe4ad064-8cfe-4f17-a42a-4b60af6e9e6e name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:53:53 ha-570850 crio[3788]: time="2024-05-20 10:53:53.658404381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fbd089079a46f63ff820ac71583c01e220f2c5761930e7999ca1b9b68d99553c,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716202329824326685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716202289824302946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716202281824156776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202275829793342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202240934212888,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026
db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc02
8a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202240689099592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\
":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernet
es.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f999
37cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f93
11987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe4ad064-8cfe-4f17-a42a-4b60af6e9e6e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fbd089079a46f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   6a973dfea8b68       storage-provisioner
	44a408e291c1c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Running             kindnet-cni               3                   ccad4bbeb746b       kindnet-2zphj
	cbf4148192834       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Running             kube-controller-manager   2                   1d8c51dfd18bd       kube-controller-manager-ha-570850
	b1abea911944e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Running             kube-apiserver            3                   eb560a5a0f57a       kube-apiserver-ha-570850
	2f0638b0a11ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   6a973dfea8b68       storage-provisioner
	4a133506868eb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   7e79c547e58b0       busybox-fc5497c4f-q9r5v
	970cae7e58e20       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   e59330b11d714       kube-vip-ha-570850
	af1ea8e41b729       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               2                   ccad4bbeb746b       kindnet-2zphj
	e1428c77c0501       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   f741cd63d2153       coredns-7db6d8ff4d-jw77g
	2345cda026db0       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago        Running             kube-proxy                1                   760a55f5a3898       kube-proxy-dnhm6
	8478dc10333b9       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago        Running             kube-scheduler            1                   e837ff20e95ab       kube-scheduler-ha-570850
	cb5165edef082       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   880da8bf8800b       etcd-ha-570850
	0ccc0ff586316       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago        Exited              kube-apiserver            2                   eb560a5a0f57a       kube-apiserver-ha-570850
	77bb623d31463       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago        Exited              kube-controller-manager   1                   1d8c51dfd18bd       kube-controller-manager-ha-570850
	0ce23a0de515f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   16a5e205296fb       coredns-7db6d8ff4d-xr965
	7ff013665ebf9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   1b009be6473f8       busybox-fc5497c4f-q9r5v
	90d27af32d37b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   7f39567bf239f       coredns-7db6d8ff4d-jw77g
	db09b659f2af2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   5862a2beb755c       coredns-7db6d8ff4d-xr965
	2ab644050dd47       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      14 minutes ago       Exited              kube-proxy                0                   17d9c116389c0       kube-proxy-dnhm6
	3ed532de1dd4e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   6e5a2b4300b4d       etcd-ha-570850
	5f81e882715f6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      15 minutes ago       Exited              kube-scheduler            0                   760d0e6343aaf       kube-scheduler-ha-570850
	
	
	==> coredns [0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52] <==
	[INFO] plugin/kubernetes: Trace[661069811]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:50:43.854) (total time: 10001ms):
	Trace[661069811]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:50:53.856)
	Trace[661069811]: [10.001577462s] [10.001577462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1098691614]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:50:48.010) (total time: 10001ms):
	Trace[1098691614]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:50:58.012)
	Trace[1098691614]: [10.001866316s] [10.001866316s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45586->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:45586->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d] <==
	[INFO] 10.244.1.2:48903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120367s
	[INFO] 10.244.1.2:57868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001671078s
	[INFO] 10.244.1.2:56995 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082609s
	[INFO] 10.244.2.2:58942 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706357s
	[INFO] 10.244.2.2:35126 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090874s
	[INFO] 10.244.2.2:37909 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008474s
	[INFO] 10.244.2.2:51462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166312s
	[INFO] 10.244.2.2:33520 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093626s
	[INFO] 10.244.0.4:59833 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088933s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093196s
	[INFO] 10.244.1.2:50071 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124886s
	[INFO] 10.244.1.2:42327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143875s
	[INFO] 10.244.0.4:35557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115628s
	[INFO] 10.244.0.4:36441 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148453s
	[INFO] 10.244.0.4:52951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110826s
	[INFO] 10.244.0.4:41170 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075418s
	[INFO] 10.244.1.2:54236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124156s
	[INFO] 10.244.1.2:55716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008044s
	[INFO] 10.244.1.2:51672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128329s
	[INFO] 10.244.2.2:44501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195618s
	[INFO] 10.244.2.2:48083 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105613s
	[INFO] 10.244.2.2:41793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131302s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1920&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c] <==
	[INFO] 10.244.2.2:57122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001355331s
	[INFO] 10.244.0.4:59866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156011s
	[INFO] 10.244.0.4:56885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009797641s
	[INFO] 10.244.0.4:37828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138434s
	[INFO] 10.244.1.2:59957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100136s
	[INFO] 10.244.1.2:41914 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180125s
	[INFO] 10.244.1.2:38547 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077887s
	[INFO] 10.244.1.2:35980 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172936s
	[INFO] 10.244.2.2:50204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019701s
	[INFO] 10.244.2.2:42765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076194s
	[INFO] 10.244.2.2:44033 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001282105s
	[INFO] 10.244.0.4:49359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121025s
	[INFO] 10.244.0.4:41717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077487s
	[INFO] 10.244.0.4:44192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062315s
	[INFO] 10.244.1.2:40081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176555s
	[INFO] 10.244.2.2:58201 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150016s
	[INFO] 10.244.2.2:56308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145342s
	[INFO] 10.244.2.2:36926 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093685s
	[INFO] 10.244.2.2:42968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009681s
	[INFO] 10.244.1.2:34643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:49012 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281845s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1951&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1951&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[350754239]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:50:44.690) (total time: 10001ms):
	Trace[350754239]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:50:54.691)
	Trace[350754239]: [10.001535123s] [10.001535123s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43826->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43826->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44156->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44156->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43832->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43832->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-570850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_38_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:38:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:53:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:51:23 +0000   Mon, 20 May 2024 10:39:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    ha-570850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 328136f6dc1d4de39599ea377c28404e
	  System UUID:                328136f6-dc1d-4de3-9599-ea377c28404e
	  Boot ID:                    bc8aa620-b931-4636-9be0-cbe2c643d897
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q9r5v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-jw77g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-xr965             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-570850                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-2zphj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-570850             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-570850    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dnhm6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-570850             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-570850                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 14m    kube-proxy       
	  Normal   Starting                 2m30s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m    kubelet          Node ha-570850 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m    kubelet          Node ha-570850 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m    kubelet          Node ha-570850 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   NodeReady                14m    kubelet          Node ha-570850 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Warning  ContainerGCFailed        4m2s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m26s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   RegisteredNode           2m18s  node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	  Normal   RegisteredNode           43s    node-controller  Node ha-570850 event: Registered Node ha-570850 in Controller
	
	
	Name:               ha-570850-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_41_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:40:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:53:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:52:05 +0000   Mon, 20 May 2024 10:51:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    ha-570850-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 679a56af7395441fb77f57941486fb9b
	  System UUID:                679a56af-7395-441f-b77f-57941486fb9b
	  Boot ID:                    752295ac-6834-4f74-b2f2-eb38f1212afe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wpv4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-570850-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-f5x4s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-570850-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-570850-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6s5x7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-570850-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-570850-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-570850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-570850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-570850-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  NodeNotReady             9m39s                  node-controller  Node ha-570850-m02 status is now: NodeNotReady
	  Normal  Starting                 2m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x8 over 2m58s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x8 over 2m58s)  kubelet          Node ha-570850-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x7 over 2m58s)  kubelet          Node ha-570850-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m26s                  node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	  Normal  RegisteredNode           43s                    node-controller  Node ha-570850-m02 event: Registered Node ha-570850-m02 in Controller
	
	
	Name:               ha-570850-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-570850-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=ha-570850
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T10_43_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:43:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-570850-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:46:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 10:43:39 +0000   Mon, 20 May 2024 10:52:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    ha-570850-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f2f8926106479fbb09b362a7c92c83
	  System UUID:                14f2f892-6106-479f-bb09-b362a7c92c83
	  Boot ID:                    2b747312-6091-4a44-bf3d-81557c80cae1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-p9s2j       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-xt8ht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-570850-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-570850-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           10m                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-570850-m04 status is now: NodeReady
	  Normal  RegisteredNode           2m26s              node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  RegisteredNode           2m18s              node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	  Normal  NodeNotReady             106s               node-controller  Node ha-570850-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           43s                node-controller  Node ha-570850-m04 event: Registered Node ha-570850-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.905313] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063920] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182694] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.123506] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.262222] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.187463] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.199064] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.059504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.313134] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[May20 10:39] kauditd_printk_skb: 21 callbacks suppressed
	[May20 10:41] kauditd_printk_skb: 72 callbacks suppressed
	[May20 10:50] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.137290] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.175518] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +0.150488] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.300141] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +1.258769] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +5.092805] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.503054] kauditd_printk_skb: 76 callbacks suppressed
	[  +7.055284] kauditd_printk_skb: 2 callbacks suppressed
	[May20 10:51] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.452931] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d] <==
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T10:49:01.787883Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.152:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T10:49:01.787942Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.152:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T10:49:01.788008Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"900c4b71f7b778f3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T10:49:01.788218Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788361Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788449Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788523Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788598Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78872Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78875Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.788786Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.78884Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.788972Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789046Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789147Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.792404Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-05-20T10:49:01.792487Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-05-20T10:49:01.792512Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-570850","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.152:2380"],"advertise-client-urls":["https://192.168.39.152:2379"]}
	
	
	==> etcd [cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4] <==
	{"level":"info","ts":"2024-05-20T10:52:51.382892Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"900c4b71f7b778f3","to":"8a9e64255fae650f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-20T10:52:51.38308Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:52:51.414965Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"900c4b71f7b778f3","to":"8a9e64255fae650f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-20T10:52:51.415106Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:52:51.485999Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-20T10:52:51.486322Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8a9e64255fae650f","rtt":"0s","error":"dial tcp 192.168.39.240:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-20T10:52:55.661887Z","caller":"traceutil/trace.go:171","msg":"trace[1891030076] transaction","detail":"{read_only:false; response_revision:2515; number_of_response:1; }","duration":"116.814274ms","start":"2024-05-20T10:52:55.545034Z","end":"2024-05-20T10:52:55.661848Z","steps":["trace[1891030076] 'process raft request'  (duration: 116.46909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T10:53:39.777518Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.240:49966","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-05-20T10:53:39.793378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 switched to configuration voters=(10379754194041534707 14408768591081558336)"}
	{"level":"info","ts":"2024-05-20T10:53:39.795909Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ce072c4559d5992c","local-member-id":"900c4b71f7b778f3","removed-remote-peer-id":"8a9e64255fae650f","removed-remote-peer-urls":["https://192.168.39.240:2380"]}
	{"level":"info","ts":"2024-05-20T10:53:39.796124Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:53:39.796362Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:53:39.796433Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:53:39.796789Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:53:39.796859Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:53:39.796997Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:53:39.797355Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f","error":"context canceled"}
	{"level":"warn","ts":"2024-05-20T10:53:39.797447Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8a9e64255fae650f","error":"failed to read 8a9e64255fae650f on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-05-20T10:53:39.797516Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:53:39.79784Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f","error":"context canceled"}
	{"level":"info","ts":"2024-05-20T10:53:39.797939Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:53:39.798013Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:53:39.798078Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"900c4b71f7b778f3","removed-remote-peer-id":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:53:39.816343Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"900c4b71f7b778f3","remote-peer-id-stream-handler":"900c4b71f7b778f3","remote-peer-id-from":"8a9e64255fae650f"}
	{"level":"warn","ts":"2024-05-20T10:53:39.820495Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"900c4b71f7b778f3","remote-peer-id-stream-handler":"900c4b71f7b778f3","remote-peer-id-from":"8a9e64255fae650f"}
	
	
	==> kernel <==
	 10:53:54 up 15 min,  0 users,  load average: 0.24, 0.51, 0.34
	Linux ha-570850 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [44a408e291c1c58bf1b86d09aeb934f7ffc434a794f200f996ac648a948fb747] <==
	I0520 10:53:20.916806       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:53:30.961237       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:53:30.961278       1 main.go:227] handling current node
	I0520 10:53:30.961289       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:53:30.961295       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:53:30.961436       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:53:30.961467       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:53:30.961580       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:53:30.961613       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:53:40.967903       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:53:40.967947       1 main.go:227] handling current node
	I0520 10:53:40.967958       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:53:40.967964       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:53:40.968145       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:53:40.968177       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:53:40.968227       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:53:40.968250       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	I0520 10:53:50.984794       1 main.go:223] Handling node with IPs: map[192.168.39.152:{}]
	I0520 10:53:50.984836       1 main.go:227] handling current node
	I0520 10:53:50.984903       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0520 10:53:50.984910       1 main.go:250] Node ha-570850-m02 has CIDR [10.244.1.0/24] 
	I0520 10:53:50.985127       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0520 10:53:50.985154       1 main.go:250] Node ha-570850-m03 has CIDR [10.244.2.0/24] 
	I0520 10:53:50.985268       1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
	I0520 10:53:50.985351       1 main.go:250] Node ha-570850-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e] <==
	I0520 10:50:41.426803       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 10:50:43.021158       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:46.093379       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:49.165158       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:52.237070       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:50:55.309122       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [0ccc0ff58631642554d2c4dda79b9063f12efce048682c58a10f62aca1d1bdd6] <==
	I0520 10:50:41.382599       1 options.go:221] external host was not specified, using 192.168.39.152
	I0520 10:50:41.384951       1 server.go:148] Version: v1.30.1
	I0520 10:50:41.385034       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:50:42.086743       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0520 10:50:42.087720       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0520 10:50:42.087801       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0520 10:50:42.087964       1 instance.go:299] Using reconciler: lease
	I0520 10:50:42.088406       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0520 10:51:02.081471       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0520 10:51:02.082836       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0520 10:51:02.089260       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b1abea911944eec26d1914472f2ae123f18b3bfbfaeb80b6d7349faa0ef068d9] <==
	I0520 10:51:23.746988       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0520 10:51:23.747005       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 10:51:23.747017       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 10:51:23.807071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 10:51:23.807228       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 10:51:23.807314       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 10:51:23.807320       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 10:51:23.807429       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 10:51:23.808318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 10:51:23.809916       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 10:51:23.809998       1 aggregator.go:165] initial CRD sync complete...
	I0520 10:51:23.810045       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 10:51:23.810068       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 10:51:23.810075       1 cache.go:39] Caches are synced for autoregister controller
	I0520 10:51:23.814335       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 10:51:23.832811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 10:51:23.834572       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 10:51:23.834621       1 policy_source.go:224] refreshing policies
	W0520 10:51:23.839203       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.240]
	I0520 10:51:23.841785       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 10:51:23.862023       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0520 10:51:23.869480       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0520 10:51:23.877096       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 10:51:24.714740       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0520 10:51:25.094443       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.111 192.168.39.152]
	
	
	==> kube-controller-manager [77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc] <==
	I0520 10:50:41.928307       1 serving.go:380] Generated self-signed cert in-memory
	I0520 10:50:42.230214       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 10:50:42.230314       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:50:42.231880       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 10:50:42.232770       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 10:50:42.232926       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 10:50:42.233019       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0520 10:51:03.094351       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.152:8443/healthz\": dial tcp 192.168.39.152:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9] <==
	I0520 10:51:43.734248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="598.132µs"
	I0520 10:51:46.026770       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7rrwn\": the object has been modified; please apply your changes to the latest version and try again"
	I0520 10:51:46.027026       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"1274fdc6-2bfb-4a11-be9a-6c47b12678e1", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7rrwn": the object has been modified; please apply your changes to the latest version and try again
	I0520 10:51:46.046520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.773336ms"
	I0520 10:51:46.046887       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="195.807µs"
	I0520 10:51:49.057445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.097677ms"
	I0520 10:51:49.058101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.619µs"
	I0520 10:52:07.953022       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-7rrwn\": the object has been modified; please apply your changes to the latest version and try again"
	I0520 10:52:07.956240       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"1274fdc6-2bfb-4a11-be9a-6c47b12678e1", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-7rrwn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-7rrwn": the object has been modified; please apply your changes to the latest version and try again
	I0520 10:52:07.978032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.7686ms"
	I0520 10:52:07.978246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="96.656µs"
	I0520 10:52:08.858380       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-570850-m04"
	I0520 10:52:09.058775       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.886522ms"
	I0520 10:52:09.059624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.642µs"
	I0520 10:52:42.696585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.709µs"
	I0520 10:53:02.043031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.675188ms"
	I0520 10:53:02.043311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.865µs"
	I0520 10:53:36.480472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.050093ms"
	I0520 10:53:36.575413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.291783ms"
	I0520 10:53:36.631971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.286383ms"
	I0520 10:53:36.632578       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="286.155µs"
	I0520 10:53:38.637336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.71µs"
	I0520 10:53:39.215734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.42µs"
	I0520 10:53:39.237385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.597µs"
	I0520 10:53:39.246801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.785µs"
	
	
	==> kube-proxy [2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a] <==
	I0520 10:50:42.252548       1 server_linux.go:69] "Using iptables proxy"
	E0520 10:50:42.702920       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:50:45.773173       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:50:48.845302       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:50:54.990510       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:51:07.278218       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 10:51:23.652078       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0520 10:51:23.792928       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:51:23.793026       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:51:23.793049       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:51:23.795600       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:51:23.796001       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:51:23.796042       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:51:23.797889       1 config.go:192] "Starting service config controller"
	I0520 10:51:23.797947       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:51:23.797983       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:51:23.798003       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:51:23.798826       1 config.go:319] "Starting node config controller"
	I0520 10:51:23.798873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:51:23.898722       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:51:23.898763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:51:23.899076       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0] <==
	E0520 10:47:56.817798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:47:59.886237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:47:59.886296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:47:59.886743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:47:59.886788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:02.958465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:02.958522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:06.030750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:06.030910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:06.031048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:06.031088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:09.102907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:09.103139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.318779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.318956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.319040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:33.678915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:33.678996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:39.822265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:39.822374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:42.893857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:42.894425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6] <==
	W0520 10:48:57.725415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:48:57.725683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:48:57.740415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:48:57.740553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:48:57.787073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:48:57.787239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:48:57.832834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:48:57.832960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:48:58.121359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:48:58.121848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:48:58.122808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:48:58.122897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:48:58.211914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:48:58.212150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:48:58.356603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:48:58.356831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:48:58.487408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:48:58.487464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:49:01.242093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.242122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:49:01.323331       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:49:01.323366       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:49:01.451021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.451066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.692998       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2] <==
	E0520 10:51:18.574601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.115455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.115536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.578422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.578548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.152:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.609240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.152:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.609313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.152:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:19.627241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.152:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:19.627303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.152:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:20.354183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.152:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:20.354245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.152:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:20.959906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.152:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:20.959963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.152:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:21.539782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.152:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:51:21.539834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.152:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:51:23.747175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:51:23.747232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:51:23.747319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:51:23.747353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:51:23.747400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:51:23.747438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:51:23.755534       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:51:23.755585       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 10:51:44.909303       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 10:53:36.531367       1 schedule_one.go:1100] "Error updating pod" err="pods \"busybox-fc5497c4f-469cb\" not found" pod="default/busybox-fc5497c4f-469cb"
	
	
	==> kubelet <==
	May 20 10:51:29 ha-570850 kubelet[1371]: I0520 10:51:29.814080    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:51:29 ha-570850 kubelet[1371]: E0520 10:51:29.814264    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:51:29 ha-570850 kubelet[1371]: I0520 10:51:29.814497    1371 scope.go:117] "RemoveContainer" containerID="af1ea8e41b729b4bfd6dd51080f4c8595b5de450fe04bd402819d41665abb73e"
	May 20 10:51:43 ha-570850 kubelet[1371]: I0520 10:51:43.814117    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:51:43 ha-570850 kubelet[1371]: E0520 10:51:43.814432    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:51:52 ha-570850 kubelet[1371]: E0520 10:51:52.854536    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:51:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:51:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:51:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:51:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:51:54 ha-570850 kubelet[1371]: I0520 10:51:54.813765    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:51:54 ha-570850 kubelet[1371]: E0520 10:51:54.815619    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:52:09 ha-570850 kubelet[1371]: I0520 10:52:09.813279    1371 scope.go:117] "RemoveContainer" containerID="2f0638b0a11ad59bb370048b750e6aa995e391982efdc9ab796ed79c11ac3b35"
	May 20 10:52:20 ha-570850 kubelet[1371]: I0520 10:52:20.814119    1371 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-570850" podUID="f230fe6d-0d6e-4a3b-bbf8-694d256f1fde"
	May 20 10:52:20 ha-570850 kubelet[1371]: I0520 10:52:20.833479    1371 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-570850"
	May 20 10:52:52 ha-570850 kubelet[1371]: E0520 10:52:52.856827    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:52:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:52:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:52:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:52:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 10:53:52 ha-570850 kubelet[1371]: E0520 10:53:52.853968    1371 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 10:53:52 ha-570850 kubelet[1371]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 10:53:52 ha-570850 kubelet[1371]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 10:53:52 ha-570850 kubelet[1371]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 10:53:52 ha-570850 kubelet[1371]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 10:53:53.164071   30294 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18925-3819/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-570850 -n ha-570850
helpers_test.go:261: (dbg) Run:  kubectl --context ha-570850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-mhqgb
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-570850 describe pod busybox-fc5497c4f-mhqgb
helpers_test.go:282: (dbg) kubectl --context ha-570850 describe pod busybox-fc5497c4f-mhqgb:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-mhqgb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-znggn (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-znggn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  17s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  17s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  17s (x2 over 19s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (19.46s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (172.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 stop -v=7 --alsologtostderr
E0520 10:54:54.309854   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 stop -v=7 --alsologtostderr: exit status 82 (2m1.702225768s)

                                                
                                                
-- stdout --
	* Stopping node "ha-570850-m04"  ...
	* Stopping node "ha-570850-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:53:55.591526   30412 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:53:55.591790   30412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:53:55.591800   30412 out.go:304] Setting ErrFile to fd 2...
	I0520 10:53:55.591804   30412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:53:55.592019   30412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:53:55.592292   30412 out.go:298] Setting JSON to false
	I0520 10:53:55.592388   30412 mustload.go:65] Loading cluster: ha-570850
	I0520 10:53:55.592756   30412 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:53:55.592854   30412 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:53:55.593088   30412 mustload.go:65] Loading cluster: ha-570850
	I0520 10:53:55.593275   30412 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:53:55.593318   30412 stop.go:39] StopHost: ha-570850-m04
	I0520 10:53:55.593707   30412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:55.593769   30412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:55.608122   30412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
	I0520 10:53:55.608572   30412 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:55.609139   30412 main.go:141] libmachine: Using API Version  1
	I0520 10:53:55.609183   30412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:55.609569   30412 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:55.611888   30412 out.go:177] * Stopping node "ha-570850-m04"  ...
	I0520 10:53:55.613481   30412 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 10:53:55.613503   30412 main.go:141] libmachine: (ha-570850-m04) Calling .DriverName
	I0520 10:53:55.613734   30412 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 10:53:55.613761   30412 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHHostname
	I0520 10:53:55.616324   30412 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:53:55.616763   30412 main.go:141] libmachine: (ha-570850-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:30:02", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:53:25 +0000 UTC Type:0 Mac:52:54:00:9d:30:02 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:ha-570850-m04 Clientid:01:52:54:00:9d:30:02}
	I0520 10:53:55.616799   30412 main.go:141] libmachine: (ha-570850-m04) DBG | domain ha-570850-m04 has defined IP address 192.168.39.87 and MAC address 52:54:00:9d:30:02 in network mk-ha-570850
	I0520 10:53:55.616964   30412 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHPort
	I0520 10:53:55.617125   30412 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHKeyPath
	I0520 10:53:55.617241   30412 main.go:141] libmachine: (ha-570850-m04) Calling .GetSSHUsername
	I0520 10:53:55.617381   30412 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m04/id_rsa Username:docker}
	I0520 10:53:55.696161   30412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 10:53:55.749931   30412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	W0520 10:53:55.802879   30412 stop.go:55] failed to complete vm config backup (will continue): [failed to copy "/etc/kubernetes" to "/var/lib/minikube/backup" (will continue): sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup: Process exited with status 23
	stdout:
	
	stderr:
	rsync: [sender] link_stat "/etc/kubernetes" failed: No such file or directory (2)
	rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1336) [sender=3.2.7]
	]
	I0520 10:53:55.802918   30412 main.go:141] libmachine: Stopping "ha-570850-m04"...
	I0520 10:53:55.802934   30412 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:53:55.804525   30412 main.go:141] libmachine: (ha-570850-m04) Calling .Stop
	I0520 10:53:55.807844   30412 main.go:141] libmachine: (ha-570850-m04) Waiting for machine to stop 0/120
	I0520 10:53:56.845826   30412 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:53:56.847114   30412 main.go:141] libmachine: Machine "ha-570850-m04" was stopped.
	I0520 10:53:56.847131   30412 stop.go:75] duration metric: took 1.233650352s to stop
	I0520 10:53:56.847180   30412 stop.go:39] StopHost: ha-570850-m02
	I0520 10:53:56.847464   30412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:53:56.847537   30412 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:53:56.862161   30412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I0520 10:53:56.862563   30412 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:53:56.863044   30412 main.go:141] libmachine: Using API Version  1
	I0520 10:53:56.863069   30412 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:53:56.863336   30412 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:53:56.865635   30412 out.go:177] * Stopping node "ha-570850-m02"  ...
	I0520 10:53:56.866959   30412 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 10:53:56.866980   30412 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:53:56.867205   30412 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 10:53:56.867228   30412 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:53:56.869802   30412 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:53:56.870151   30412 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:50:46 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:53:56.870178   30412 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:53:56.870321   30412 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:53:56.870476   30412 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:53:56.870627   30412 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:53:56.870753   30412 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	I0520 10:53:56.951724   30412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 10:53:57.007514   30412 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 10:53:57.062699   30412 main.go:141] libmachine: Stopping "ha-570850-m02"...
	I0520 10:53:57.062728   30412 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:53:57.064640   30412 main.go:141] libmachine: (ha-570850-m02) Calling .Stop
	I0520 10:53:57.067865   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 0/120
	I0520 10:53:58.069137   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 1/120
	I0520 10:53:59.070587   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 2/120
	I0520 10:54:00.072694   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 3/120
	I0520 10:54:01.073879   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 4/120
	I0520 10:54:02.075336   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 5/120
	I0520 10:54:03.076871   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 6/120
	I0520 10:54:04.078312   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 7/120
	I0520 10:54:05.080304   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 8/120
	I0520 10:54:06.081662   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 9/120
	I0520 10:54:07.083532   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 10/120
	I0520 10:54:08.084935   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 11/120
	I0520 10:54:09.086551   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 12/120
	I0520 10:54:10.087741   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 13/120
	I0520 10:54:11.089047   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 14/120
	I0520 10:54:12.090819   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 15/120
	I0520 10:54:13.092261   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 16/120
	I0520 10:54:14.093796   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 17/120
	I0520 10:54:15.095496   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 18/120
	I0520 10:54:16.096988   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 19/120
	I0520 10:54:17.098942   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 20/120
	I0520 10:54:18.100379   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 21/120
	I0520 10:54:19.101691   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 22/120
	I0520 10:54:20.103051   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 23/120
	I0520 10:54:21.104679   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 24/120
	I0520 10:54:22.106483   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 25/120
	I0520 10:54:23.107791   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 26/120
	I0520 10:54:24.109416   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 27/120
	I0520 10:54:25.111313   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 28/120
	I0520 10:54:26.112512   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 29/120
	I0520 10:54:27.114069   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 30/120
	I0520 10:54:28.115559   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 31/120
	I0520 10:54:29.116776   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 32/120
	I0520 10:54:30.118344   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 33/120
	I0520 10:54:31.119708   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 34/120
	I0520 10:54:32.121408   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 35/120
	I0520 10:54:33.122718   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 36/120
	I0520 10:54:34.123921   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 37/120
	I0520 10:54:35.125274   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 38/120
	I0520 10:54:36.126437   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 39/120
	I0520 10:54:37.127990   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 40/120
	I0520 10:54:38.129473   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 41/120
	I0520 10:54:39.130661   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 42/120
	I0520 10:54:40.132167   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 43/120
	I0520 10:54:41.133451   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 44/120
	I0520 10:54:42.135062   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 45/120
	I0520 10:54:43.136540   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 46/120
	I0520 10:54:44.137818   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 47/120
	I0520 10:54:45.139226   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 48/120
	I0520 10:54:46.140542   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 49/120
	I0520 10:54:47.142148   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 50/120
	I0520 10:54:48.143384   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 51/120
	I0520 10:54:49.144530   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 52/120
	I0520 10:54:50.146307   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 53/120
	I0520 10:54:51.147486   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 54/120
	I0520 10:54:52.148639   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 55/120
	I0520 10:54:53.150349   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 56/120
	I0520 10:54:54.151452   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 57/120
	I0520 10:54:55.152843   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 58/120
	I0520 10:54:56.154127   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 59/120
	I0520 10:54:57.155636   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 60/120
	I0520 10:54:58.156977   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 61/120
	I0520 10:54:59.158130   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 62/120
	I0520 10:55:00.159340   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 63/120
	I0520 10:55:01.160922   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 64/120
	I0520 10:55:02.162931   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 65/120
	I0520 10:55:03.164376   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 66/120
	I0520 10:55:04.165914   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 67/120
	I0520 10:55:05.167094   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 68/120
	I0520 10:55:06.168391   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 69/120
	I0520 10:55:07.169939   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 70/120
	I0520 10:55:08.171261   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 71/120
	I0520 10:55:09.172486   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 72/120
	I0520 10:55:10.173964   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 73/120
	I0520 10:55:11.175130   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 74/120
	I0520 10:55:12.176714   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 75/120
	I0520 10:55:13.178182   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 76/120
	I0520 10:55:14.179274   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 77/120
	I0520 10:55:15.180584   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 78/120
	I0520 10:55:16.182048   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 79/120
	I0520 10:55:17.183762   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 80/120
	I0520 10:55:18.185280   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 81/120
	I0520 10:55:19.186609   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 82/120
	I0520 10:55:20.187903   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 83/120
	I0520 10:55:21.189158   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 84/120
	I0520 10:55:22.190947   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 85/120
	I0520 10:55:23.192368   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 86/120
	I0520 10:55:24.193850   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 87/120
	I0520 10:55:25.195177   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 88/120
	I0520 10:55:26.196697   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 89/120
	I0520 10:55:27.198575   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 90/120
	I0520 10:55:28.200002   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 91/120
	I0520 10:55:29.201518   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 92/120
	I0520 10:55:30.203324   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 93/120
	I0520 10:55:31.204835   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 94/120
	I0520 10:55:32.206695   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 95/120
	I0520 10:55:33.208070   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 96/120
	I0520 10:55:34.209723   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 97/120
	I0520 10:55:35.211346   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 98/120
	I0520 10:55:36.212803   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 99/120
	I0520 10:55:37.215034   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 100/120
	I0520 10:55:38.216409   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 101/120
	I0520 10:55:39.217896   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 102/120
	I0520 10:55:40.219699   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 103/120
	I0520 10:55:41.221489   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 104/120
	I0520 10:55:42.223091   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 105/120
	I0520 10:55:43.224350   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 106/120
	I0520 10:55:44.225741   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 107/120
	I0520 10:55:45.227168   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 108/120
	I0520 10:55:46.228528   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 109/120
	I0520 10:55:47.229972   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 110/120
	I0520 10:55:48.231972   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 111/120
	I0520 10:55:49.233226   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 112/120
	I0520 10:55:50.234614   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 113/120
	I0520 10:55:51.235900   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 114/120
	I0520 10:55:52.237760   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 115/120
	I0520 10:55:53.239422   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 116/120
	I0520 10:55:54.240823   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 117/120
	I0520 10:55:55.242347   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 118/120
	I0520 10:55:56.243887   30412 main.go:141] libmachine: (ha-570850-m02) Waiting for machine to stop 119/120
	I0520 10:55:57.245413   30412 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 10:55:57.245475   30412 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 10:55:57.247433   30412 out.go:177] 
	W0520 10:55:57.249174   30412 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 10:55:57.249197   30412 out.go:239] * 
	* 
	W0520 10:55:57.251671   30412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 10:55:57.253085   30412 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-570850 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr: exit status 7 (33.497269196s)

                                                
                                                
-- stdout --
	ha-570850
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-570850-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-570850-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:55:57.296820   30902 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:55:57.296968   30902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:55:57.296982   30902 out.go:304] Setting ErrFile to fd 2...
	I0520 10:55:57.296987   30902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:55:57.297198   30902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:55:57.297365   30902 out.go:298] Setting JSON to false
	I0520 10:55:57.297388   30902 mustload.go:65] Loading cluster: ha-570850
	I0520 10:55:57.297424   30902 notify.go:220] Checking for updates...
	I0520 10:55:57.297727   30902 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:55:57.297740   30902 status.go:255] checking status of ha-570850 ...
	I0520 10:55:57.298157   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:55:57.298228   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:55:57.318109   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0520 10:55:57.318461   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:55:57.319127   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:55:57.319156   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:55:57.319498   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:55:57.319702   30902 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:55:57.321484   30902 status.go:330] ha-570850 host status = "Running" (err=<nil>)
	I0520 10:55:57.321502   30902 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:55:57.321769   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:55:57.321802   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:55:57.336200   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0520 10:55:57.336608   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:55:57.337077   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:55:57.337097   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:55:57.337381   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:55:57.337553   30902 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:55:57.340078   30902 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:55:57.340467   30902 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:55:57.340486   30902 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:55:57.340614   30902 host.go:66] Checking if "ha-570850" exists ...
	I0520 10:55:57.340899   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:55:57.340964   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:55:57.354779   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0520 10:55:57.355155   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:55:57.355558   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:55:57.355577   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:55:57.355858   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:55:57.356026   30902 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:55:57.356204   30902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:55:57.356240   30902 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:55:57.358479   30902 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:55:57.358942   30902 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:55:57.358968   30902 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:55:57.359078   30902 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:55:57.359240   30902 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:55:57.359381   30902 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:55:57.359520   30902 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:55:57.442289   30902 ssh_runner.go:195] Run: systemctl --version
	I0520 10:55:57.449225   30902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:55:57.464706   30902 kubeconfig.go:125] found "ha-570850" server: "https://192.168.39.254:8443"
	I0520 10:55:57.464734   30902 api_server.go:166] Checking apiserver status ...
	I0520 10:55:57.464764   30902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:55:57.481118   30902 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6020/cgroup
	W0520 10:55:57.490538   30902 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6020/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 10:55:57.490586   30902 ssh_runner.go:195] Run: ls
	I0520 10:55:57.495686   30902 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:56:02.496348   30902 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 10:56:02.496419   30902 retry.go:31] will retry after 285.383771ms: state is "Stopped"
	I0520 10:56:02.782936   30902 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:56:07.783617   30902 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0520 10:56:07.783727   30902 retry.go:31] will retry after 366.482643ms: state is "Stopped"
	I0520 10:56:08.151077   30902 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:56:08.877192   30902 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0520 10:56:08.877260   30902 retry.go:31] will retry after 313.199452ms: state is "Stopped"
	I0520 10:56:09.190713   30902 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0520 10:56:12.269276   30902 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0520 10:56:12.269320   30902 status.go:422] ha-570850 apiserver status = Running (err=<nil>)
	I0520 10:56:12.269338   30902 status.go:257] ha-570850 status: &{Name:ha-570850 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:56:12.269363   30902 status.go:255] checking status of ha-570850-m02 ...
	I0520 10:56:12.269697   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:56:12.269735   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:56:12.284335   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0520 10:56:12.284746   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:56:12.285277   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:56:12.285299   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:56:12.285643   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:56:12.285822   30902 main.go:141] libmachine: (ha-570850-m02) Calling .GetState
	I0520 10:56:12.287324   30902 status.go:330] ha-570850-m02 host status = "Running" (err=<nil>)
	I0520 10:56:12.287350   30902 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:56:12.287616   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:56:12.287646   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:56:12.301666   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34191
	I0520 10:56:12.302063   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:56:12.302528   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:56:12.302557   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:56:12.302848   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:56:12.303022   30902 main.go:141] libmachine: (ha-570850-m02) Calling .GetIP
	I0520 10:56:12.305343   30902 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:56:12.305820   30902 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:50:46 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:56:12.305847   30902 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:56:12.305993   30902 host.go:66] Checking if "ha-570850-m02" exists ...
	I0520 10:56:12.306273   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:56:12.306304   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:56:12.320015   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0520 10:56:12.320340   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:56:12.320800   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:56:12.320824   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:56:12.321153   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:56:12.321344   30902 main.go:141] libmachine: (ha-570850-m02) Calling .DriverName
	I0520 10:56:12.321527   30902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:56:12.321549   30902 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHHostname
	I0520 10:56:12.324034   30902 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:56:12.324496   30902 main.go:141] libmachine: (ha-570850-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:04:25", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:50:46 +0000 UTC Type:0 Mac:52:54:00:02:04:25 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-570850-m02 Clientid:01:52:54:00:02:04:25}
	I0520 10:56:12.324551   30902 main.go:141] libmachine: (ha-570850-m02) DBG | domain ha-570850-m02 has defined IP address 192.168.39.111 and MAC address 52:54:00:02:04:25 in network mk-ha-570850
	I0520 10:56:12.324620   30902 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHPort
	I0520 10:56:12.324848   30902 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHKeyPath
	I0520 10:56:12.325028   30902 main.go:141] libmachine: (ha-570850-m02) Calling .GetSSHUsername
	I0520 10:56:12.325192   30902 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850-m02/id_rsa Username:docker}
	W0520 10:56:30.733120   30902 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.111:22: connect: no route to host
	W0520 10:56:30.733204   30902 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0520 10:56:30.733221   30902 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:56:30.733229   30902 status.go:257] ha-570850-m02 status: &{Name:ha-570850-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0520 10:56:30.733246   30902 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	I0520 10:56:30.733258   30902 status.go:255] checking status of ha-570850-m04 ...
	I0520 10:56:30.733702   30902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:56:30.733759   30902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:56:30.748550   30902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41471
	I0520 10:56:30.749036   30902 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:56:30.749533   30902 main.go:141] libmachine: Using API Version  1
	I0520 10:56:30.749555   30902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:56:30.749875   30902 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:56:30.750093   30902 main.go:141] libmachine: (ha-570850-m04) Calling .GetState
	I0520 10:56:30.751662   30902 status.go:330] ha-570850-m04 host status = "Stopped" (err=<nil>)
	I0520 10:56:30.751678   30902 status.go:343] host is not running, skipping remaining checks
	I0520 10:56:30.751686   30902 status.go:257] ha-570850-m04 status: &{Name:ha-570850-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr": ha-570850
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-570850-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-570850-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr": ha-570850
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-570850-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-570850-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr": ha-570850
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-570850-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-570850-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-570850 -n ha-570850
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-570850 -n ha-570850: exit status 2 (15.595166897s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-570850 logs -n 25: (1.343225952s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m04 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp testdata/cp-test.txt                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850:/home/docker/cp-test_ha-570850-m04_ha-570850.txt                       |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850 sudo cat                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850.txt                                 |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m02:/home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m02 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m03:/home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n                                                                 | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | ha-570850-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-570850 ssh -n ha-570850-m03 sudo cat                                          | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC | 20 May 24 10:43 UTC |
	|         | /home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-570850 node stop m02 -v=7                                                     | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-570850 node start m02 -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-570850 -v=7                                                           | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-570850 -v=7                                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-570850 --wait=true -v=7                                                    | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-570850                                                                | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:53 UTC |                     |
	| node    | ha-570850 node delete m03 -v=7                                                   | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:53 UTC | 20 May 24 10:53 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-570850 stop -v=7                                                              | ha-570850 | jenkins | v1.33.1 | 20 May 24 10:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:49:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:49:00.810311   28545 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:49:00.810557   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810568   28545 out.go:304] Setting ErrFile to fd 2...
	I0520 10:49:00.810574   28545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:49:00.810762   28545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:49:00.811306   28545 out.go:298] Setting JSON to false
	I0520 10:49:00.812227   28545 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1884,"bootTime":1716200257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:49:00.812278   28545 start.go:139] virtualization: kvm guest
	I0520 10:49:00.814785   28545 out.go:177] * [ha-570850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:49:00.816402   28545 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:49:00.816404   28545 notify.go:220] Checking for updates...
	I0520 10:49:00.817786   28545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:49:00.819143   28545 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:49:00.820484   28545 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:49:00.821703   28545 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:49:00.822978   28545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:49:00.824648   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:00.824745   28545 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:49:00.825258   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.825308   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.841522   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0520 10:49:00.841935   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.842526   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.842555   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.842946   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.843128   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.877361   28545 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 10:49:00.878748   28545 start.go:297] selected driver: kvm2
	I0520 10:49:00.878771   28545 start.go:901] validating driver "kvm2" against &{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.878981   28545 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:49:00.879321   28545 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.879400   28545 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:49:00.893725   28545 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:49:00.894355   28545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:49:00.894384   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:49:00.894392   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:49:00.894441   28545 start.go:340] cluster config:
	{Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:49:00.894551   28545 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:49:00.896612   28545 out.go:177] * Starting "ha-570850" primary control-plane node in "ha-570850" cluster
	I0520 10:49:00.897801   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:49:00.897833   28545 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:49:00.897846   28545 cache.go:56] Caching tarball of preloaded images
	I0520 10:49:00.897966   28545 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 10:49:00.897980   28545 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 10:49:00.898144   28545 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/config.json ...
	I0520 10:49:00.898400   28545 start.go:360] acquireMachinesLock for ha-570850: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 10:49:00.898447   28545 start.go:364] duration metric: took 28.765µs to acquireMachinesLock for "ha-570850"
	I0520 10:49:00.898467   28545 start.go:96] Skipping create...Using existing machine configuration
	I0520 10:49:00.898476   28545 fix.go:54] fixHost starting: 
	I0520 10:49:00.898758   28545 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:49:00.898808   28545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:49:00.912154   28545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0520 10:49:00.912545   28545 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:49:00.913059   28545 main.go:141] libmachine: Using API Version  1
	I0520 10:49:00.913080   28545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:49:00.913373   28545 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:49:00.913661   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.913859   28545 main.go:141] libmachine: (ha-570850) Calling .GetState
	I0520 10:49:00.915287   28545 fix.go:112] recreateIfNeeded on ha-570850: state=Running err=<nil>
	W0520 10:49:00.915306   28545 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 10:49:00.917201   28545 out.go:177] * Updating the running kvm2 "ha-570850" VM ...
	I0520 10:49:00.918411   28545 machine.go:94] provisionDockerMachine start ...
	I0520 10:49:00.918427   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:49:00.918614   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:00.921039   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921498   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:00.921531   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:00.921695   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:00.921876   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922030   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:00.922197   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:00.922398   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:00.922593   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:00.922606   28545 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:49:01.030186   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.030212   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030430   28545 buildroot.go:166] provisioning hostname "ha-570850"
	I0520 10:49:01.030458   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.030664   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.033274   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033744   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.033768   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.033918   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.034109   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034287   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.034442   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.034599   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.034836   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.034853   28545 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-570850 && echo "ha-570850" | sudo tee /etc/hostname
	I0520 10:49:01.156409   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-570850
	
	I0520 10:49:01.156435   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.159025   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159417   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.159442   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.159603   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.159765   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159884   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.159977   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.160116   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.160263   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.160277   28545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-570850' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-570850/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-570850' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:49:01.267106   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:49:01.267139   28545 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 10:49:01.267167   28545 buildroot.go:174] setting up certificates
	I0520 10:49:01.267174   28545 provision.go:84] configureAuth start
	I0520 10:49:01.267182   28545 main.go:141] libmachine: (ha-570850) Calling .GetMachineName
	I0520 10:49:01.267438   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:49:01.270043   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270426   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.270462   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.270636   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.272701   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273087   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.273114   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.273209   28545 provision.go:143] copyHostCerts
	I0520 10:49:01.273239   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273270   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 10:49:01.273284   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 10:49:01.273361   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 10:49:01.273463   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273485   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 10:49:01.273492   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 10:49:01.273528   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 10:49:01.273608   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273631   28545 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 10:49:01.273640   28545 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 10:49:01.273674   28545 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 10:49:01.273756   28545 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.ha-570850 san=[127.0.0.1 192.168.39.152 ha-570850 localhost minikube]
	I0520 10:49:01.408798   28545 provision.go:177] copyRemoteCerts
	I0520 10:49:01.408849   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:49:01.408885   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.411703   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412078   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.412113   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.412257   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.412414   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.412576   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.412668   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:49:01.496371   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 10:49:01.496434   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:49:01.524309   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 10:49:01.524367   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:49:01.550409   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 10:49:01.550477   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0520 10:49:01.574937   28545 provision.go:87] duration metric: took 307.75201ms to configureAuth
	I0520 10:49:01.574962   28545 buildroot.go:189] setting minikube options for container-runtime
	I0520 10:49:01.575171   28545 config.go:182] Loaded profile config "ha-570850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:49:01.575233   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:49:01.577826   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578239   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:49:01.578268   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:49:01.578468   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:49:01.578697   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.578855   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:49:01.579015   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:49:01.579170   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:49:01.579333   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:49:01.579356   28545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 10:50:32.380185   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 10:50:32.380214   28545 machine.go:97] duration metric: took 1m31.461789949s to provisionDockerMachine
	I0520 10:50:32.380235   28545 start.go:293] postStartSetup for "ha-570850" (driver="kvm2")
	I0520 10:50:32.380249   28545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:50:32.380272   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.380603   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:50:32.380647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.383513   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.383918   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.383943   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.384065   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.384230   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.384381   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.384523   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.468651   28545 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:50:32.473075   28545 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 10:50:32.473099   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 10:50:32.473170   28545 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 10:50:32.473245   28545 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 10:50:32.473254   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 10:50:32.473344   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 10:50:32.483198   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:32.507373   28545 start.go:296] duration metric: took 127.126357ms for postStartSetup
	I0520 10:50:32.507413   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.507673   28545 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0520 10:50:32.507693   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.510108   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510473   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.510497   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.510642   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.510839   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.510983   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.511169   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.595598   28545 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0520 10:50:32.595620   28545 fix.go:56] duration metric: took 1m31.697144912s for fixHost
	I0520 10:50:32.595647   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.598084   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598387   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.598424   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.598549   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.598816   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599002   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.599162   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.599307   28545 main.go:141] libmachine: Using SSH client type: native
	I0520 10:50:32.599465   28545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.152 22 <nil> <nil>}
	I0520 10:50:32.599475   28545 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 10:50:32.706542   28545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716202232.681128286
	
	I0520 10:50:32.706566   28545 fix.go:216] guest clock: 1716202232.681128286
	I0520 10:50:32.706575   28545 fix.go:229] Guest: 2024-05-20 10:50:32.681128286 +0000 UTC Remote: 2024-05-20 10:50:32.595625893 +0000 UTC m=+91.817262454 (delta=85.502393ms)
	I0520 10:50:32.706607   28545 fix.go:200] guest clock delta is within tolerance: 85.502393ms
	I0520 10:50:32.706616   28545 start.go:83] releasing machines lock for "ha-570850", held for 1m31.808156278s
	I0520 10:50:32.706633   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.706850   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:32.709202   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709544   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.709569   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.709734   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710174   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710331   28545 main.go:141] libmachine: (ha-570850) Calling .DriverName
	I0520 10:50:32.710424   28545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:50:32.710473   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.710479   28545 ssh_runner.go:195] Run: cat /version.json
	I0520 10:50:32.710495   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHHostname
	I0520 10:50:32.712748   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713063   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713193   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713216   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713322   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713480   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:32.713491   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713507   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:32.713666   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.713676   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHPort
	I0520 10:50:32.713846   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHKeyPath
	I0520 10:50:32.713854   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	I0520 10:50:32.713963   28545 main.go:141] libmachine: (ha-570850) Calling .GetSSHUsername
	I0520 10:50:32.714072   28545 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/ha-570850/id_rsa Username:docker}
	W0520 10:50:32.810770   28545 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 10:50:32.810863   28545 ssh_runner.go:195] Run: systemctl --version
	I0520 10:50:32.817315   28545 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 10:50:32.989007   28545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 10:50:32.995244   28545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 10:50:32.995308   28545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:50:33.005340   28545 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 10:50:33.005361   28545 start.go:494] detecting cgroup driver to use...
	I0520 10:50:33.005418   28545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 10:50:33.023319   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 10:50:33.037817   28545 docker.go:217] disabling cri-docker service (if available) ...
	I0520 10:50:33.037877   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 10:50:33.052368   28545 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 10:50:33.066569   28545 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 10:50:33.215546   28545 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 10:50:33.358676   28545 docker.go:233] disabling docker service ...
	I0520 10:50:33.358732   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 10:50:33.375689   28545 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 10:50:33.389459   28545 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 10:50:33.538495   28545 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 10:50:33.698476   28545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 10:50:33.715452   28545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:50:33.734625   28545 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 10:50:33.734681   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.745464   28545 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 10:50:33.745532   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.756460   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.767137   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.777739   28545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:50:33.788287   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.798513   28545 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.809204   28545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 10:50:33.819733   28545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:50:33.829957   28545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:50:33.839650   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:33.982271   28545 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 10:50:34.731686   28545 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 10:50:34.731752   28545 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 10:50:34.736749   28545 start.go:562] Will wait 60s for crictl version
	I0520 10:50:34.736814   28545 ssh_runner.go:195] Run: which crictl
	I0520 10:50:34.740764   28545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:50:34.788346   28545 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 10:50:34.788434   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.817827   28545 ssh_runner.go:195] Run: crio --version
	I0520 10:50:34.851583   28545 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 10:50:34.852979   28545 main.go:141] libmachine: (ha-570850) Calling .GetIP
	I0520 10:50:34.855568   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.855886   28545 main.go:141] libmachine: (ha-570850) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:5a:58", ip: ""} in network mk-ha-570850: {Iface:virbr1 ExpiryTime:2024-05-20 11:38:26 +0000 UTC Type:0 Mac:52:54:00:8e:5a:58 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:ha-570850 Clientid:01:52:54:00:8e:5a:58}
	I0520 10:50:34.855910   28545 main.go:141] libmachine: (ha-570850) DBG | domain ha-570850 has defined IP address 192.168.39.152 and MAC address 52:54:00:8e:5a:58 in network mk-ha-570850
	I0520 10:50:34.856082   28545 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 10:50:34.860885   28545 kubeadm.go:877] updating cluster {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:50:34.861058   28545 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:50:34.861115   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.904271   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.904291   28545 crio.go:433] Images already preloaded, skipping extraction
	I0520 10:50:34.904333   28545 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 10:50:34.939632   28545 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 10:50:34.939656   28545 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:50:34.939667   28545 kubeadm.go:928] updating node { 192.168.39.152 8443 v1.30.1 crio true true} ...
	I0520 10:50:34.939770   28545 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-570850 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:50:34.939868   28545 ssh_runner.go:195] Run: crio config
	I0520 10:50:34.996367   28545 cni.go:84] Creating CNI manager for ""
	I0520 10:50:34.996383   28545 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0520 10:50:34.996396   28545 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:50:34.996422   28545 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.152 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-570850 NodeName:ha-570850 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:50:34.996550   28545 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-570850"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
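	The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is the kubeadm config minikube renders before copying it to the node; the scp step further down in this log writes it to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch for inspecting, and optionally syntax-checking, the rendered file on the node (the "config validate" subcommand is an assumption about the kubeadm build shipped for v1.30.1):

	    out/minikube-linux-amd64 -p ha-570850 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	    out/minikube-linux-amd64 -p ha-570850 ssh "sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"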
	
	I0520 10:50:34.996568   28545 kube-vip.go:115] generating kube-vip config ...
	I0520 10:50:34.996616   28545 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0520 10:50:35.009156   28545 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0520 10:50:35.009260   28545 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
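	The static-pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below) and is what keeps the control-plane VIP 192.168.39.254 answering on port 8443. A minimal sketch for checking it once the node is up; the curl probe assumes the API server's default RBAC still exposes /version to anonymous clients:

	    out/minikube-linux-amd64 -p ha-570850 ssh "sudo cat /etc/kubernetes/manifests/kube-vip.yaml"
	    curl -k https://192.168.39.254:8443/version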
	I0520 10:50:35.009315   28545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:50:35.019441   28545 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:50:35.019484   28545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0520 10:50:35.029250   28545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0520 10:50:35.045718   28545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:50:35.062017   28545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0520 10:50:35.078117   28545 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0520 10:50:35.094874   28545 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0520 10:50:35.099790   28545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:50:35.242816   28545 ssh_runner.go:195] Run: sudo systemctl start kubelet
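	With the 10-kubeadm.conf drop-in and kubelet.service unit copied, the daemon-reload/start pair above brings kubelet up with the flags shown earlier. A hedged sketch for confirming the unit actually picked up the drop-in:

	    out/minikube-linux-amd64 -p ha-570850 ssh "sudo systemctl cat kubelet"
	    out/minikube-linux-amd64 -p ha-570850 ssh "sudo systemctl is-active kubelet"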
	I0520 10:50:35.258353   28545 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850 for IP: 192.168.39.152
	I0520 10:50:35.258374   28545 certs.go:194] generating shared ca certs ...
	I0520 10:50:35.258393   28545 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.258537   28545 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 10:50:35.258592   28545 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 10:50:35.258615   28545 certs.go:256] generating profile certs ...
	I0520 10:50:35.258705   28545 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/client.key
	I0520 10:50:35.258743   28545 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f
	I0520 10:50:35.258789   28545 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.152 192.168.39.111 192.168.39.240 192.168.39.254]
	I0520 10:50:35.323349   28545 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f ...
	I0520 10:50:35.323379   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f: {Name:mk80a84971cac50baec0222bf4535f70c6381cb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323558   28545 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f ...
	I0520 10:50:35.323576   28545 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f: {Name:mk3eecf14b574a3dc74183c4476fb213e98e80f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:50:35.323674   28545 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	I0520 10:50:35.323862   28545 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key.c79f885f -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key
	I0520 10:50:35.324034   28545 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key
	I0520 10:50:35.324052   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 10:50:35.324068   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 10:50:35.324091   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 10:50:35.324110   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 10:50:35.324129   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 10:50:35.324149   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 10:50:35.324168   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 10:50:35.324186   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 10:50:35.324246   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 10:50:35.324285   28545 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 10:50:35.324301   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 10:50:35.324334   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:50:35.324365   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:50:35.324401   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 10:50:35.324453   28545 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 10:50:35.324492   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.324512   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.324531   28545 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.325124   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:50:35.350917   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:50:35.375112   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:50:35.398626   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 10:50:35.422688   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 10:50:35.447172   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 10:50:35.470738   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:50:35.553009   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:50:35.610723   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 10:50:35.648271   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 10:50:35.677258   28545 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:50:35.710822   28545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
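	At this point every certificate and the kubeconfig have been copied to the node. A hedged way to spot-check that a copied certificate matches the profile copy on the Jenkins host is to compare SHA-256 fingerprints (both paths are taken from the scp lines above):

	    openssl x509 -noout -fingerprint -sha256 -in /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/ha-570850/apiserver.crt
	    out/minikube-linux-amd64 -p ha-570850 ssh "sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/apiserver.crt"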
	I0520 10:50:35.733493   28545 ssh_runner.go:195] Run: openssl version
	I0520 10:50:35.740566   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:50:35.753475   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759848   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.759889   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:50:35.765875   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:50:35.782991   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 10:50:35.798421   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805287   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.805341   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 10:50:35.819309   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 10:50:35.829389   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 10:50:35.840391   28545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844907   28545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.844994   28545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 10:50:35.850890   28545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
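	The "ln -fs ... /etc/ssl/certs/<hash>.0" commands above reproduce the OpenSSL c_rehash convention: each symlink is named after the certificate's subject hash, which is why every link is preceded by an "openssl x509 -hash -noout" run. For example, on the node (hash value b5213941 taken from the minikubeCA step above):

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> points at minikubeCA.pem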
	I0520 10:50:35.860742   28545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:50:35.865350   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 10:50:35.871853   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 10:50:35.878065   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 10:50:35.883542   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 10:50:35.889358   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 10:50:35.895053   28545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
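	The six "openssl x509 -checkend 86400" runs above verify that none of the control-plane certificates expire within the next 86400 seconds (24 hours); the command exits non-zero when a certificate would expire inside that window. A standalone example against one of the certs just copied to the node:

	    out/minikube-linux-amd64 -p ha-570850 ssh \
	      "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo still-valid || echo expires-within-24h"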
	I0520 10:50:35.900431   28545 kubeadm.go:391] StartCluster: {Name:ha-570850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-570850 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.152 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.111 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.87 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:50:35.900524   28545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 10:50:35.900574   28545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 10:50:35.939457   28545 cri.go:89] found id: "0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52"
	I0520 10:50:35.939478   28545 cri.go:89] found id: "1554ca782d2e3f7c42b06a3c0e74531de3d26d0ca5fe60fa686a7b6afa191326"
	I0520 10:50:35.939484   28545 cri.go:89] found id: "e07aed17013f0a9702039b0e913a8f3ee47ae3a2c5783ffb5c6d19619cbaf058"
	I0520 10:50:35.939489   28545 cri.go:89] found id: "53da5ef1aad0da9abe35a5f2db0970025b8855cb63c2361b1c617a487c1b0156"
	I0520 10:50:35.939493   28545 cri.go:89] found id: "a996ad2d62ca76ff1ec8d79e5907a3dccdc7fcf6178ff99795bd7bec168652ed"
	I0520 10:50:35.939497   28545 cri.go:89] found id: "5dfd2933bae47652d85545b3c50f13118d2b9b80e0d182e41b38e5e1fab415ee"
	I0520 10:50:35.939501   28545 cri.go:89] found id: "90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d"
	I0520 10:50:35.939504   28545 cri.go:89] found id: "db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c"
	I0520 10:50:35.939507   28545 cri.go:89] found id: "501666a770a6dd85fe8ebd88a3c25203861c54a857e25ce148e0857f6f855141"
	I0520 10:50:35.939513   28545 cri.go:89] found id: "2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0"
	I0520 10:50:35.939517   28545 cri.go:89] found id: "5497e7ab0c98629fad1acf76af134ab8564cfc623f5403e153cfa0fb0f8a2853"
	I0520 10:50:35.939520   28545 cri.go:89] found id: "3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d"
	I0520 10:50:35.939524   28545 cri.go:89] found id: "5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6"
	I0520 10:50:35.939528   28545 cri.go:89] found id: "43a07bf3d1ba60c5355e4d9f8c1e98191b6466fbc3d78317595a3faedd8aef68"
	I0520 10:50:35.939536   28545 cri.go:89] found id: "25ac8a29b1819345d9e9d1fa83c34bd7aa7a977396482f98e2d0513fca97cbab"
	I0520 10:50:35.939542   28545 cri.go:89] found id: ""
	I0520 10:50:35.939589   28545 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.726444921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5304400-dbd5-40b5-a2d0-27b4641c79f9 name=/runtime.v1.RuntimeService/Version
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.727524554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4af36a0-ad84-45ea-a8d9-4f1cfa7029cc name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.728279166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202606728250963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4af36a0-ad84-45ea-a8d9-4f1cfa7029cc name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.729065300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffcb36c9-f01f-422b-9945-4754911493c8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.729122208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffcb36c9-f01f-422b-9945-4754911493c8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.729848000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8635c90b1b6ee69352b951e3824fe5d24a19c7540e652103d1dc63f80d294a8c,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202559827079689,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb7c1d132c81d6f90b1b1bb59cdc645878e8efaa2bfa35d134081cd98636a6c,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202540240305943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acb8703f244b9af6c148abd7ab111ff951ceaac9508a8636ae199a289410a8b,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202528835001957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPor
t\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb
-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c0
4ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffcb36c9-f01f-422b-9945-4754911493c8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.775511159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=127ae20f-bb53-485f-945d-1d4e3d8cebf6 name=/runtime.v1.RuntimeService/Version
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.775589099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=127ae20f-bb53-485f-945d-1d4e3d8cebf6 name=/runtime.v1.RuntimeService/Version
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.776806930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f2ef04e-d540-44f8-b27c-aba55c4855b2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.777501493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202606777230851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f2ef04e-d540-44f8-b27c-aba55c4855b2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.778448379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b3db073-31d5-4105-88e3-893c35614471 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.778512848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b3db073-31d5-4105-88e3-893c35614471 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.778995622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8635c90b1b6ee69352b951e3824fe5d24a19c7540e652103d1dc63f80d294a8c,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202559827079689,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb7c1d132c81d6f90b1b1bb59cdc645878e8efaa2bfa35d134081cd98636a6c,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202540240305943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acb8703f244b9af6c148abd7ab111ff951ceaac9508a8636ae199a289410a8b,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202528835001957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPor
t\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb
-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c0
4ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b3db073-31d5-4105-88e3-893c35614471 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.814922658Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=276b1c02-5916-4076-95a0-100443d2c1f8 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.815217369Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-q9r5v,Uid:4d01afa5-94fb-411d-9823-db40ab44750a,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202274039332952,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:42:27.662268802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-570850,Uid:43a157a6ef6f6adedca3ec1b2efae360,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1716202255002806225,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{kubernetes.io/config.hash: 43a157a6ef6f6adedca3ec1b2efae360,kubernetes.io/config.seen: 2024-05-20T10:50:35.070701725Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&PodSandboxMetadata{Name:etcd-ha-570850,Uid:02c0f64d23f6631d30babf893433d524,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240273071323,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.152:2
379,kubernetes.io/config.hash: 02c0f64d23f6631d30babf893433d524,kubernetes.io/config.seen: 2024-05-20T10:38:52.776467350Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jw77g,Uid:85653365-561a-4097-bda9-77121fd12b00,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240272856561,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:39:07.602311201Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,Namespace
:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240271124878,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[
{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-20T10:39:08.909352809Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&PodSandboxMetadata{Name:kube-proxy-dnhm6,Uid:3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240258065455,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:39:05.841162047Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-570850,U
id:82dfc407eb7ba91cf1e2d56c64f57aae,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240248622218,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 82dfc407eb7ba91cf1e2d56c64f57aae,kubernetes.io/config.seen: 2024-05-20T10:38:52.776471907Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-570850,Uid:ac9b5b567925629a181fdaef05d3ba36,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240237480306,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ac9b5b567925629a181fdaef05d3ba36,kubernetes.io/config.seen: 2024-05-20T10:38:52.776472900Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-570850,Uid:701ae10dab129a58e87d4c5965df6fc9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240218425817,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.152:8443,kubernetes.io/config.hash: 701ae10dab129a58e87d4c5965df6fc9,kubernetes.io/config.seen: 2024-05-20T10:38:52.77647
0667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&PodSandboxMetadata{Name:kindnet-2zphj,Uid:2ddea332-9ab2-445d-a26e-148ad8cf9a77,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202240185072706,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:39:05.870164596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xr965,Uid:e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716202235537502695,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T10:39:07.596065375Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=276b1c02-5916-4076-95a0-100443d2c1f8 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.815982260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0db6ed2-16ee-43b8-9c37-a19212824d9d name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.816058459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0db6ed2-16ee-43b8-9c37-a19212824d9d name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.816232338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCo
unt: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0db6ed2-16ee-43b8-9c37-a19212824d9d name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.820517443Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbf8ff9d-4512-4d4d-a3c6-a9059da910c9 name=/runtime.v1.RuntimeService/Version
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.820591940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbf8ff9d-4512-4d4d-a3c6-a9059da910c9 name=/runtime.v1.RuntimeService/Version
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.821578040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=add5176e-e71f-409e-8c83-0386c13f97ae name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.822033594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716202606822008465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144959,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=add5176e-e71f-409e-8c83-0386c13f97ae name=/runtime.v1.ImageService/ImageFsInfo
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.822534895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b20fb096-ca63-4728-9721-ae96696c2377 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.822605899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b20fb096-ca63-4728-9721-ae96696c2377 name=/runtime.v1.RuntimeService/ListContainers
	May 20 10:56:46 ha-570850 crio[3788]: time="2024-05-20 10:56:46.823102213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8635c90b1b6ee69352b951e3824fe5d24a19c7540e652103d1dc63f80d294a8c,PodSandboxId:ccad4bbeb746bfc50404a24ac9d8323ee5be19659b0d197a59662a373b727d2b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716202559827079689,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2zphj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddea332-9ab2-445d-a26e-148ad8cf9a77,},Annotations:map[string]string{io.kubernetes.container.hash: 58ad0bbd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fb7c1d132c81d6f90b1b1bb59cdc645878e8efaa2bfa35d134081cd98636a6c,PodSandboxId:eb560a5a0f57a5e3454255fffe1a1a4d60d88367acf1a430bd1d76665c77e5f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716202540240305943,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 701ae10dab129a58e87d4c5965df6fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 807e8444,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acb8703f244b9af6c148abd7ab111ff951ceaac9508a8636ae199a289410a8b,PodSandboxId:6a973dfea8b686d55ca529bc6b8e4166bff7b2142d9c5f121b7eb4ddc98bbafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716202528835001957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9bb8bb,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716202283824088424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a133506868eb80a3921a605af4867a241ec37844a0d40c6d7929551f6036748,PodSandboxId:7e79c547e58b08b7586d3cef19ed1055d7eee92cde589e18d2ad99975bad3f27,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716202274235812612,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970cae7e58e20a58e4a502d5224ff66dffa08257d45b657546df98ba24596cb4,PodSandboxId:e59330b11d7146bddbb4c0d8e007f1a0a0838c79388de7596bf1cb74f7c4cc3c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716202255127800238,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43a157a6ef6f6adedca3ec1b2efae360,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a,PodSandboxId:760a55f5a3898089a1e72c0a76c1a8e595c3ff2ea95d00ad51db01f47d6d565c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716202240773018227,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09,PodSandboxId:f741cd63d21531353d4018d96a6134a90785ec7d1bc2e7edbaa52fa8b070b4d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202240804193061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annotations:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2,PodSandboxId:e837ff20e95ab83b895d71ec6046b7a4555f428ea44989836dc8b34759441cf3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716202240754819007,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4,PodSandboxId:880da8bf8800b3ee0911a93c24b0a42b9499717bb7e3686f424c5ab9b25d8c77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716202240711622122,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc,PodSandboxId:1d8c51dfd18bda2444377b18c869c0a7ebf7ad5207aadcb3be0e6a01b223400b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716202240592985727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dfc407eb7ba91cf1e2d56c64f57aae,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52,PodSandboxId:16a5e205296fb8498e901a8e3ddc6c2578c452e41c00d22505f71a5a87a53490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716202235709951425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPor
t\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ff013665ebf91eb4fae800bd48fa2efc3f8632dc07e7003ec82b5486c68ec19,PodSandboxId:1b009be6473f81849187f74f7da82b73377adf8976c1cb1f5792a6e615ea0d1d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716201751816995551,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-q9r5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d01afa5-94fb
-411d-9823-db40ab44750a,},Annotations:map[string]string{io.kubernetes.container.hash: 82ff3620,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d,PodSandboxId:7f39567bf239f0b09767df0a1eb8032b088b78754142a035e6e665f2fc46d607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548128249776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jw77g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85653365-561a-4097-bda9-77121fd12b00,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7b57622,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c,PodSandboxId:5862a2beb755cd5ed578a9a8e1b7da9616d971aaa740fe0bf2e784ffb311ebab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716201548090990616,Labels:map[string]string{io.
kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xr965,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23a9ea0-3c6a-405a-9e09-d814ca8adeb0,},Annotations:map[string]string{io.kubernetes.container.hash: b66f9063,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0,PodSandboxId:17d9c116389c0bdcd1100f80d59a82087af100368347a9eca0fe6cdc345f697b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716201546520879818,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dnhm6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a6da30f-f2cd-48fc-867e-b06a0a2f7fb1,},Annotations:map[string]string{io.kubernetes.container.hash: d4c4ff6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6,PodSandboxId:760d0e6343aaf5dbdaa6cae8529c4c2e8cae766b39adad9cab89ee6eac055a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716201526502389608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac9b5b567925629a181fdaef05d3ba36,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d,PodSandboxId:6e5a2b4300b4df655f3a33eac5388e2128e5320adf2f7e6802a69610ac9c2f4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c0
4ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716201526517116832,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-570850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c0f64d23f6631d30babf893433d524,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4812a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b20fb096-ca63-4728-9721-ae96696c2377 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8635c90b1b6ee       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      47 seconds ago       Exited              kindnet-cni               4                   ccad4bbeb746b       kindnet-2zphj
	1fb7c1d132c81       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Exited              kube-apiserver            4                   eb560a5a0f57a       kube-apiserver-ha-570850
	0acb8703f244b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       5                   6a973dfea8b68       storage-provisioner
	cbf4148192834       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago        Running             kube-controller-manager   2                   1d8c51dfd18bd       kube-controller-manager-ha-570850
	4a133506868eb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago        Running             busybox                   1                   7e79c547e58b0       busybox-fc5497c4f-q9r5v
	970cae7e58e20       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago        Running             kube-vip                  0                   e59330b11d714       kube-vip-ha-570850
	e1428c77c0501       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago        Running             coredns                   1                   f741cd63d2153       coredns-7db6d8ff4d-jw77g
	2345cda026db0       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago        Running             kube-proxy                1                   760a55f5a3898       kube-proxy-dnhm6
	8478dc10333b9       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago        Running             kube-scheduler            1                   e837ff20e95ab       kube-scheduler-ha-570850
	cb5165edef082       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago        Running             etcd                      1                   880da8bf8800b       etcd-ha-570850
	77bb623d31463       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago        Exited              kube-controller-manager   1                   1d8c51dfd18bd       kube-controller-manager-ha-570850
	0ce23a0de515f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago        Running             coredns                   1                   16a5e205296fb       coredns-7db6d8ff4d-xr965
	7ff013665ebf9       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago       Exited              busybox                   0                   1b009be6473f8       busybox-fc5497c4f-q9r5v
	90d27af32d37b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago       Exited              coredns                   0                   7f39567bf239f       coredns-7db6d8ff4d-jw77g
	db09b659f2af2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago       Exited              coredns                   0                   5862a2beb755c       coredns-7db6d8ff4d-xr965
	2ab644050dd47       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      17 minutes ago       Exited              kube-proxy                0                   17d9c116389c0       kube-proxy-dnhm6
	3ed532de1dd4e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      18 minutes ago       Exited              etcd                      0                   6e5a2b4300b4d       etcd-ha-570850
	5f81e882715f6       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      18 minutes ago       Exited              kube-scheduler            0                   760d0e6343aaf       kube-scheduler-ha-570850
	
	
	==> coredns [0ce23a0de515fbdebdb3702d724ee4b29661b49f034f79cc451fb00a5e0f2b52] <==
	[INFO] plugin/kubernetes: Trace[1545876109]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:56:07.181) (total time: 11445ms):
	Trace[1545876109]: ---"Objects listed" error:Unauthorized 11445ms (10:56:18.626)
	Trace[1545876109]: [11.445707881s] [11.445707881s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[504819377]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:56:07.202) (total time: 11425ms):
	Trace[504819377]: ---"Objects listed" error:Unauthorized 11424ms (10:56:18.626)
	Trace[504819377]: [11.425005528s] [11.425005528s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2724": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2724": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2724": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2724": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2614": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1699721391]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:56:24.276) (total time: 11640ms):
	Trace[1699721391]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2614": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 11640ms (10:56:35.917)
	Trace[1699721391]: [11.640999882s] [11.640999882s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2614": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2635": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2047119038]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:56:24.018) (total time: 11899ms):
	Trace[2047119038]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2635": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 11899ms (10:56:35.917)
	Trace[2047119038]: [11.899664601s] [11.899664601s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2635": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2614": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2614": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [90d27af32d37b6d8b2c568730d3c13e099cc1fac0169af4fcb7a1d747387825d] <==
	[INFO] 10.244.1.2:48903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120367s
	[INFO] 10.244.1.2:57868 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001671078s
	[INFO] 10.244.1.2:56995 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082609s
	[INFO] 10.244.2.2:58942 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706357s
	[INFO] 10.244.2.2:35126 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090874s
	[INFO] 10.244.2.2:37909 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008474s
	[INFO] 10.244.2.2:51462 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166312s
	[INFO] 10.244.2.2:33520 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093626s
	[INFO] 10.244.0.4:59833 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088933s
	[INFO] 10.244.1.2:59628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093196s
	[INFO] 10.244.1.2:50071 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124886s
	[INFO] 10.244.1.2:42327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143875s
	[INFO] 10.244.0.4:35557 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115628s
	[INFO] 10.244.0.4:36441 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148453s
	[INFO] 10.244.0.4:52951 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110826s
	[INFO] 10.244.0.4:41170 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075418s
	[INFO] 10.244.1.2:54236 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124156s
	[INFO] 10.244.1.2:55716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008044s
	[INFO] 10.244.1.2:51672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128329s
	[INFO] 10.244.2.2:44501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195618s
	[INFO] 10.244.2.2:48083 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105613s
	[INFO] 10.244.2.2:41793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131302s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1920&timeout=5m33s&timeoutSeconds=333&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [db09b659f2af2a56483f90673543431bbecf9206d71afa28a395a96ebf802b1c] <==
	[INFO] 10.244.2.2:57122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001355331s
	[INFO] 10.244.0.4:59866 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156011s
	[INFO] 10.244.0.4:56885 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009797641s
	[INFO] 10.244.0.4:37828 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138434s
	[INFO] 10.244.1.2:59957 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100136s
	[INFO] 10.244.1.2:41914 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180125s
	[INFO] 10.244.1.2:38547 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077887s
	[INFO] 10.244.1.2:35980 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172936s
	[INFO] 10.244.2.2:50204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019701s
	[INFO] 10.244.2.2:42765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076194s
	[INFO] 10.244.2.2:44033 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001282105s
	[INFO] 10.244.0.4:49359 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121025s
	[INFO] 10.244.0.4:41717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077487s
	[INFO] 10.244.0.4:44192 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062315s
	[INFO] 10.244.1.2:40081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176555s
	[INFO] 10.244.2.2:58201 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150016s
	[INFO] 10.244.2.2:56308 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145342s
	[INFO] 10.244.2.2:36926 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093685s
	[INFO] 10.244.2.2:42968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009681s
	[INFO] 10.244.1.2:34643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000154857s
	[INFO] 10.244.2.2:49012 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281845s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1951&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1951&timeout=6m43s&timeoutSeconds=403&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [e1428c77c0501a4cf5bf9de480e8d9be63ee566dc028a33b588ee385a9969d09] <==
	Trace[1772806123]: ---"Objects listed" error:Unauthorized 12017ms (10:56:11.602)
	Trace[1772806123]: [12.017560674s] [12.017560674s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[235630736]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:55:59.453) (total time: 12149ms):
	Trace[235630736]: ---"Objects listed" error:Unauthorized 12149ms (10:56:11.602)
	Trace[235630736]: [12.149602168s] [12.149602168s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[632444678]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (20-May-2024 10:55:59.557) (total time: 12046ms):
	Trace[632444678]: ---"Objects listed" error:Unauthorized 12046ms (10:56:11.603)
	Trace[632444678]: [12.04632472s] [12.04632472s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2637": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2637": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2644": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2644": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2644": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2644": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2752": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=2752": dial tcp 10.96.0.1:443: connect: connection refused
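
	Editor's note: the coredns failures above all point at the in-cluster apiserver VIP 10.96.0.1:443 (the ClusterIP of the default "kubernetes" Service); the plugin's reflector cannot list Services, Namespaces, or EndpointSlices while the control plane is down, and the "Unauthorized" entries appear once the apiserver restarts. A minimal sketch for checking whether that VIP is reachable when reproducing locally is shown below; these commands are not part of the captured log and assume a standard minikube cluster.

	    # Verify the kubernetes Service and its endpoints
	    kubectl get svc kubernetes -n default
	    kubectl get endpointslices -n default
	    # Probe the VIP from a throwaway pod on the affected cluster
	    kubectl run viptest --rm -it --restart=Never --image=curlimages/curl --command -- \
	      curl -sk --max-time 5 https://10.96.0.1:443/healthz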
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
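
	Editor's note: `kubectl describe nodes` fails here because nothing is listening on localhost:8443 on the node, i.e. the local kube-apiserver is down. On a CRI-O based minikube node the apiserver runs as a static pod, so its container state and logs can be inspected directly on the host. A hedged sketch (paths and names assume the standard minikube/kubeadm layout and are not taken from this report):

	    # On the node (e.g. via `minikube ssh` into the affected profile)
	    sudo crictl ps -a --name kube-apiserver
	    sudo crictl logs --tail 100 <container-id>
	    # Static pod manifest the kubelet restarts it from
	    sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml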
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.905313] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063920] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.182694] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.123506] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.262222] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.187463] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.199064] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.059504] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.313134] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.088050] kauditd_printk_skb: 79 callbacks suppressed
	[May20 10:39] kauditd_printk_skb: 21 callbacks suppressed
	[May20 10:41] kauditd_printk_skb: 72 callbacks suppressed
	[May20 10:50] systemd-fstab-generator[3707]: Ignoring "noauto" option for root device
	[  +0.137290] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.175518] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +0.150488] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.300141] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +1.258769] systemd-fstab-generator[3874]: Ignoring "noauto" option for root device
	[  +5.092805] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.503054] kauditd_printk_skb: 76 callbacks suppressed
	[  +7.055284] kauditd_printk_skb: 2 callbacks suppressed
	[May20 10:51] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.452931] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [3ed532de1dd4e6ddc7e6b48445889a9ba20266069d486082fda5fa1ec1093c3d] <==
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/05/20 10:49:01 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-20T10:49:01.787883Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.152:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T10:49:01.787942Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.152:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T10:49:01.788008Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"900c4b71f7b778f3","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-20T10:49:01.788218Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788361Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788449Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788523Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.788598Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78872Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c7f63ab15238d140"}
	{"level":"info","ts":"2024-05-20T10:49:01.78875Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.788786Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.78884Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.788972Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789046Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789118Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"900c4b71f7b778f3","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.789147Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8a9e64255fae650f"}
	{"level":"info","ts":"2024-05-20T10:49:01.792404Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-05-20T10:49:01.792487Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.152:2380"}
	{"level":"info","ts":"2024-05-20T10:49:01.792512Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-570850","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.152:2380"],"advertise-client-urls":["https://192.168.39.152:2379"]}
	
	
	==> etcd [cb5165edef0825fcfb61d758fe5e2ada5843adc43498b75ff1387199c0c33ff4] <==
	{"level":"info","ts":"2024-05-20T10:56:42.038031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:42.038066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:42.0381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 [logterm: 3, index: 3215] sent MsgPreVote request to c7f63ab15238d140 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:43.037743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:43.037814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:43.037835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:43.037863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 [logterm: 3, index: 3215] sent MsgPreVote request to c7f63ab15238d140 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:44.037998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:44.038038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:44.038052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:44.038067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 [logterm: 3, index: 3215] sent MsgPreVote request to c7f63ab15238d140 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:45.037967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:45.038072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:45.038106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:45.038144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 [logterm: 3, index: 3215] sent MsgPreVote request to c7f63ab15238d140 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:46.038181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:46.038288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:46.038322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:46.038369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 [logterm: 3, index: 3215] sent MsgPreVote request to c7f63ab15238d140 at term 3"}
	{"level":"warn","ts":"2024-05-20T10:56:46.468375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7f63ab15238d140","rtt":"10.789267ms","error":"dial tcp 192.168.39.111:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-20T10:56:46.475948Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7f63ab15238d140","rtt":"1.146605ms","error":"dial tcp 192.168.39.111:2380: connect: no route to host"}
	{"level":"info","ts":"2024-05-20T10:56:47.038007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:47.038047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:47.038059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 received MsgPreVoteResp from 900c4b71f7b778f3 at term 3"}
	{"level":"info","ts":"2024-05-20T10:56:47.038074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"900c4b71f7b778f3 [logterm: 3, index: 3215] sent MsgPreVote request to c7f63ab15238d140 at term 3"}
	
	
	==> kernel <==
	 10:56:47 up 18 min,  0 users,  load average: 0.48, 0.63, 0.43
	Linux ha-570850 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8635c90b1b6ee69352b951e3824fe5d24a19c7540e652103d1dc63f80d294a8c] <==
	I0520 10:56:00.164632       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0520 10:56:00.164759       1 main.go:107] hostIP = 192.168.39.152
	podIP = 192.168.39.152
	I0520 10:56:00.164920       1 main.go:116] setting mtu 1500 for CNI 
	I0520 10:56:00.164959       1 main.go:146] kindnetd IP family: "ipv4"
	I0520 10:56:00.164997       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0520 10:56:02.125226       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:56:05.197033       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:56:18.626846       1 main.go:191] Failed to get nodes, retrying after error: Unauthorized
	I0520 10:56:23.629252       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0520 10:56:26.701077       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
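
	Editor's note: kindnetd panics once its retry budget for obtaining the node list is exhausted, so the kubelet will restart the CNI pod until the apiserver VIP is reachable again. To see how often it restarted and what the previous instance logged, something like the following can be used (the `app=kindnet` label is taken from the upstream kindnet DaemonSet manifest and is an assumption here):

	    kubectl -n kube-system get pods -l app=kindnet -o wide
	    kubectl -n kube-system logs <kindnet-pod-name> --previous --tail=50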
	
	
	==> kube-apiserver [1fb7c1d132c81d6f90b1b1bb59cdc645878e8efaa2bfa35d134081cd98636a6c] <==
	Trace[1340612280]: [12.984081441s] [12.984081441s] END
	E0520 10:56:32.606010       1 cacher.go:475] cacher (cronjobs.batch): unexpected ListAndWatch error: failed to list *batch.CronJob: etcdserver: request timed out; reinitializing...
	E0520 10:56:32.606051       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0520 10:56:32.606144       1 trace.go:236] Trace[1361861445]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:ca1c4bb0-4c85-4d31-9324-5d5167d45b0e,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:GET (20-May-2024 10:56:18.628) (total time: 13977ms):
	Trace[1361861445]: [13.977378101s] [13.977378101s] END
	E0520 10:56:32.606476       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	W0520 10:56:32.606494       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	E0520 10:56:32.606539       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	F0520 10:56:32.606550       1 hooks.go:203] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	E0520 10:56:32.702549       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0520 10:56:32.702704       1 trace.go:236] Trace[1183673915]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:33cf9373-8fad-452d-b18b-4bc7a05f2b48,client:127.0.0.1,api-group:,api-version:v1,name:coredns,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:GET (20-May-2024 10:56:24.281) (total time: 8420ms):
	Trace[1183673915]: [8.420923027s] [8.420923027s] END
	E0520 10:56:32.702959       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	E0520 10:56:32.703048       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	E0520 10:56:32.703091       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	I0520 10:56:32.684030       1 trace.go:236] Trace[2135645571]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d3b0027d-b67b-4081-938e-739a6df1df1e,client:127.0.0.1,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:resourcequotas,scope:cluster,url:/api/v1/resourcequotas,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (20-May-2024 10:56:24.279) (total time: 8404ms):
	Trace[2135645571]: ["List(recursive=true) etcd3" audit-id:d3b0027d-b67b-4081-938e-739a6df1df1e,key:/resourcequotas,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 8404ms (10:56:24.279)]
	Trace[2135645571]: [8.404071988s] [8.404071988s] END
	I0520 10:56:32.703188       1 trace.go:236] Trace[1923847868]: "List(recursive=true) etcd3" audit-id:,key:/rolebindings,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (20-May-2024 10:56:19.617) (total time: 13085ms):
	Trace[1923847868]: [13.085899963s] [13.085899963s] END
	W0520 10:56:32.703228       1 reflector.go:547] storage/cacher.go:/rolebindings: failed to list *rbac.RoleBinding: etcdserver: request timed out
	I0520 10:56:32.703263       1 trace.go:236] Trace[844373686]: "Reflector ListAndWatch" name:storage/cacher.go:/rolebindings (20-May-2024 10:56:19.617) (total time: 13086ms):
	Trace[844373686]: ---"Objects listed" error:etcdserver: request timed out 13086ms (10:56:32.703)
	Trace[844373686]: [13.086111628s] [13.086111628s] END
	E0520 10:56:32.703329       1 cacher.go:475] cacher (rolebindings.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.RoleBinding: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [77bb623d314633a6cedc036b9f8bfb8f0f87d26e1bafcfca59d693b0f972fadc] <==
	I0520 10:50:41.928307       1 serving.go:380] Generated self-signed cert in-memory
	I0520 10:50:42.230214       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 10:50:42.230314       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:50:42.231880       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 10:50:42.232770       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 10:50:42.232926       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 10:50:42.233019       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0520 10:51:03.094351       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.152:8443/healthz\": dial tcp 192.168.39.152:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cbf414819283453347c675d5d68ad0041629e72fb0666d75ee136d21bc8547e9] <==
	E0520 10:56:36.605249       1 gc_controller.go:278] "Error while getting node" err="Get \"https://192.168.39.152:8443/api/v1/nodes/ha-570850-m03\": failed to get token for kube-system/pod-garbage-collector: timed out waiting for the condition" logger="pod-garbage-collector-controller" node="ha-570850-m03"
	W0520 10:56:36.718220       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:36.718291       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-570850"
	E0520 10:56:36.718308       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.152:8443/api/v1/nodes/ha-570850\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0520 10:56:36.718628       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:37.220079       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:38.221132       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:40.222542       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:40.222683       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node="ha-570850-m02"
	W0520 10:56:40.222897       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:40.223185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.IngressClass: Get "https://192.168.39.152:8443/apis/networking.k8s.io/v1/ingressclasses?resourceVersion=2644": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:40.223244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: failed to list *v1.IngressClass: Get "https://192.168.39.152:8443/apis/networking.k8s.io/v1/ingressclasses?resourceVersion=2644": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:40.723968       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:41.725372       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:42.635474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ConfigMap: Get "https://192.168.39.152:8443/api/v1/configmaps?resourceVersion=2639": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:42.635599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.152:8443/api/v1/configmaps?resourceVersion=2639": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:43.248856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://192.168.39.152:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2639": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:43.248978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://192.168.39.152:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2639": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:43.726959       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.152:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:43.727031       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-570850-m02"
	E0520 10:56:43.727045       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.152:8443/api/v1/nodes/ha-570850-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0520 10:56:44.723021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RoleBinding: Get "https://192.168.39.152:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=2645": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:44.723149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: Get "https://192.168.39.152:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=2645": dial tcp 192.168.39.152:8443: connect: connection refused
	W0520 10:56:46.156527       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: Get "https://192.168.39.152:8443/apis/apiregistration.k8s.io/v1/apiservices?resourceVersion=2637": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:46.156798       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: Get "https://192.168.39.152:8443/apis/apiregistration.k8s.io/v1/apiservices?resourceVersion=2637": dial tcp 192.168.39.152:8443: connect: connection refused
	
	
	==> kube-proxy [2345cda026db0c55b2c494dda0ccdc34462a17c763c18385f04624303c05334a] <==
	E0520 10:50:54.990510       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0520 10:51:07.278218       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0520 10:51:23.652078       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.152"]
	I0520 10:51:23.792928       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 10:51:23.793026       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 10:51:23.793049       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:51:23.795600       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:51:23.796001       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:51:23.796042       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:51:23.797889       1 config.go:192] "Starting service config controller"
	I0520 10:51:23.797947       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:51:23.797983       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:51:23.798003       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:51:23.798826       1 config.go:319] "Starting node config controller"
	I0520 10:51:23.798873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:51:23.898722       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:51:23.898763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:51:23.899076       1 shared_informer.go:320] Caches are synced for node config
	E0520 10:55:53.614078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=2613&timeout=9m42s&timeoutSeconds=582&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:55:59.757337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2610&timeout=5m53s&timeoutSeconds=353&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:56:18.190615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2724&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:56:24.333241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=2613": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:56:24.333366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=2613": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:56:36.621521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2610": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:56:36.621606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2610": dial tcp 192.168.39.254:8443: connect: no route to host
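
	Editor's note: the `%!s(MISSING)`, `%!F(MISSING)`, `%!C(MISSING)` and `%!D(MISSING)` tokens in this block (and in the next kube-proxy block) are not corruption in this report. The request URLs contain percent-encoded selector characters (%21 = `!`, %2F = `/`, %2C = `,`, %3D = `=`), and the error text evidently ends up as the format string of a printf-style logging call, so Go's fmt package reports those sequences as missing verbs. Decoded, kube-proxy is watching Services/EndpointSlices that lack the headless and proxy-name labels, plus the node object for ha-570850. The equivalent queries by hand (a sketch for local reproduction only):

	    kubectl get services -A \
	      -l '!service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name'
	    kubectl get node ha-570850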
	
	
	==> kube-proxy [2ab644050dd47c5e87668a4607b1e8fa60adead4e59dc297cbc50cec945079b0] <==
	E0520 10:47:56.817798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:47:59.886237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:47:59.886296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:47:59.886743       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:47:59.886788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:02.958465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:02.958522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:06.030750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:06.030910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:06.031048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:06.031088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:09.102907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:09.103139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.318779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.318956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:18.318989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:18.319040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:33.678915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:33.678996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1944": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:39.822265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:39.822374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1932": dial tcp 192.168.39.254:8443: connect: no route to host
	W0520 10:48:42.893857       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	E0520 10:48:42.894425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-570850&resourceVersion=1950": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [5f81e882715f6525adc896a8c2aaf8e64fa4015a8d825065ea7fb4a6f097ded6] <==
	W0520 10:48:57.725415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:48:57.725683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:48:57.740415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:48:57.740553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:48:57.787073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:48:57.787239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:48:57.832834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 10:48:57.832960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 10:48:58.121359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:48:58.121848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:48:58.122808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:48:58.122897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:48:58.211914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:48:58.212150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:48:58.356603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:48:58.356831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:48:58.487408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:48:58.487464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:49:01.242093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.242122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:49:01.323331       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:49:01.323366       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:49:01.451021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.451066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:49:01.692998       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8478dc10333b9d68f6ff41c67e38f653df5a952ffec4e715f6c95b1f3dcbcfe2] <==
	E0520 10:56:18.444480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:56:18.586977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:56:18.587030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:56:18.599348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:56:18.599438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:56:19.653985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 10:56:19.654081       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 10:56:20.563845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 10:56:20.563898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 10:56:20.749884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 10:56:20.749993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 10:56:21.146849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:56:21.146961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 10:56:21.392504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:56:21.392740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:56:23.096985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 10:56:23.097047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:56:23.263020       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:56:23.263070       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:56:25.134326       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:56:25.134384       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:56:26.429800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 10:56:26.429904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 10:56:39.414153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.152:8443/api/v1/namespaces?resourceVersion=2708": dial tcp 192.168.39.152:8443: connect: connection refused
	E0520 10:56:39.414270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.152:8443/api/v1/namespaces?resourceVersion=2708": dial tcp 192.168.39.152:8443: connect: connection refused
	
	
	==> kubelet <==
	May 20 10:56:36 ha-570850 kubelet[1371]: E0520 10:56:36.621967    1371 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=2707": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:36 ha-570850 kubelet[1371]: W0520 10:56:36.621101    1371 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2595": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:36 ha-570850 kubelet[1371]: E0520 10:56:36.621327    1371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-570850\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 10:56:36 ha-570850 kubelet[1371]: E0520 10:56:36.622263    1371 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=2595": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:36 ha-570850 kubelet[1371]: W0520 10:56:36.621262    1371 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=2529": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:36 ha-570850 kubelet[1371]: E0520 10:56:36.622874    1371 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=2529": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:37 ha-570850 kubelet[1371]: I0520 10:56:37.814129    1371 scope.go:117] "RemoveContainer" containerID="0acb8703f244b9af6c148abd7ab111ff951ceaac9508a8636ae199a289410a8b"
	May 20 10:56:37 ha-570850 kubelet[1371]: E0520 10:56:37.814848    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36)\"" pod="kube-system/storage-provisioner" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36"
	May 20 10:56:38 ha-570850 kubelet[1371]: I0520 10:56:38.391922    1371 scope.go:117] "RemoveContainer" containerID="1fb7c1d132c81d6f90b1b1bb59cdc645878e8efaa2bfa35d134081cd98636a6c"
	May 20 10:56:38 ha-570850 kubelet[1371]: E0520 10:56:38.392915    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-570850_kube-system(701ae10dab129a58e87d4c5965df6fc9)\"" pod="kube-system/kube-apiserver-ha-570850" podUID="701ae10dab129a58e87d4c5965df6fc9"
	May 20 10:56:39 ha-570850 kubelet[1371]: E0520 10:56:39.693134    1371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-570850\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-570850?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 10:56:39 ha-570850 kubelet[1371]: E0520 10:56:39.693530    1371 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 20 10:56:39 ha-570850 kubelet[1371]: I0520 10:56:39.693719    1371 status_manager.go:853] "Failed to get status for pod" podUID="701ae10dab129a58e87d4c5965df6fc9" pod="kube-system/kube-apiserver-ha-570850" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-570850\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 10:56:40 ha-570850 kubelet[1371]: I0520 10:56:40.147272    1371 scope.go:117] "RemoveContainer" containerID="1fb7c1d132c81d6f90b1b1bb59cdc645878e8efaa2bfa35d134081cd98636a6c"
	May 20 10:56:40 ha-570850 kubelet[1371]: E0520 10:56:40.147734    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-570850_kube-system(701ae10dab129a58e87d4c5965df6fc9)\"" pod="kube-system/kube-apiserver-ha-570850" podUID="701ae10dab129a58e87d4c5965df6fc9"
	May 20 10:56:42 ha-570850 kubelet[1371]: E0520 10:56:42.765175    1371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-570850?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 20 10:56:42 ha-570850 kubelet[1371]: W0520 10:56:42.765181    1371 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2690": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:42 ha-570850 kubelet[1371]: E0520 10:56:42.765285    1371 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/storage-provisioner.17d12cdb22bbe144\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{storage-provisioner.17d12cdb22bbe144  kube-system   2299 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:storage-provisioner,UID:8cbdeaca-631d-449f-bb63-9ee4ec3f7a36,APIVersion:v1,ResourceVersion:415,FieldPath:spec.containers{storage-provisioner},},Reason:BackOff,Message:Back-off restarting failed container storage-provisioner in pod storage-provisioner_kube-system(8cbdeaca-631d-449f-bb63-9ee4ec3f7a36),Source:EventSource{Component:kubelet,Host:ha-570850,},FirstTimestamp:2024-05-20 10:50:52 +0000 UTC,LastTimestamp:2024-05-20 10:54:08.561352806 +0000 UTC m=+915.872572183,
Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-570850,}"
	May 20 10:56:42 ha-570850 kubelet[1371]: I0520 10:56:42.765752    1371 status_manager.go:853] "Failed to get status for pod" podUID="2ddea332-9ab2-445d-a26e-148ad8cf9a77" pod="kube-system/kindnet-2zphj" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-2zphj\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 20 10:56:42 ha-570850 kubelet[1371]: E0520 10:56:42.765833    1371 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2690": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:42 ha-570850 kubelet[1371]: W0520 10:56:42.765486    1371 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2551": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:42 ha-570850 kubelet[1371]: E0520 10:56:42.766532    1371 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2551": dial tcp 192.168.39.254:8443: connect: no route to host
	May 20 10:56:45 ha-570850 kubelet[1371]: I0520 10:56:45.814092    1371 scope.go:117] "RemoveContainer" containerID="8635c90b1b6ee69352b951e3824fe5d24a19c7540e652103d1dc63f80d294a8c"
	May 20 10:56:45 ha-570850 kubelet[1371]: E0520 10:56:45.814348    1371 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-2zphj_kube-system(2ddea332-9ab2-445d-a26e-148ad8cf9a77)\"" pod="kube-system/kindnet-2zphj" podUID="2ddea332-9ab2-445d-a26e-148ad8cf9a77"
	May 20 10:56:45 ha-570850 kubelet[1371]: I0520 10:56:45.837132    1371 status_manager.go:853] "Failed to get status for pod" podUID="8cbdeaca-631d-449f-bb63-9ee4ec3f7a36" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 10:56:46.413278   31148 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18925-3819/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-570850 -n ha-570850
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-570850 -n ha-570850: exit status 2 (214.010247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-570850" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (172.40s)
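The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: a scanner stops once a single line exceeds its maximum token size (64 KiB by default), which is what happened while logs.go read lastStart.txt. Below is a minimal, self-contained sketch of reading such a file with an enlarged buffer; it is a hypothetical helper to illustrate the error, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines reads a file line by line, allowing lines far larger than
// bufio.Scanner's default 64 KiB token limit by installing a bigger buffer.
func readLongLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line cap to 10 MiB; without this, a very long line makes
	// Scan stop and Err return "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	// The path is only an example; substitute any log file with very long lines.
	lines, err := readLongLines("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		return
	}
	fmt.Printf("read %d lines\n", len(lines))
}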

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (304.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-563238
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-563238
E0520 11:12:19.192515   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-563238: exit status 82 (2m1.972121815s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-563238-m03"  ...
	* Stopping node "multinode-563238-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-563238" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563238 --wait=true -v=8 --alsologtostderr
E0520 11:14:54.310326   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 11:15:22.237264   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563238 --wait=true -v=8 --alsologtostderr: (3m0.06418698s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-563238
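The "exit status 82" above accompanies the GUEST_STOP_TIMEOUT message printed in the stderr block. The sketch below shows, under the assumption that the minikube binary is invoked as a child process the way the harness does, how such a non-zero exit code can be captured; the helper is hypothetical and is not the suite's actual code in multinode_test.go.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes a binary with the given arguments and returns its exit code.
// A sketch only: the real harness also captures stdout/stderr for the report.
func run(bin string, args ...string) (int, error) {
	err := exec.Command(bin, args...).Run()
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit from the child, e.g. the "exit status 82" seen above.
		return exitErr.ExitCode(), nil
	}
	// The command could not be started at all (missing binary, bad path, etc.).
	return -1, err
}

func main() {
	code, err := run("out/minikube-linux-amd64", "stop", "-p", "multinode-563238")
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("minikube stop exited with status", code)
}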
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-563238 -n multinode-563238
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-563238 logs -n 25: (1.551388066s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2129143398/001/cp-test_multinode-563238-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238:/home/docker/cp-test_multinode-563238-m02_multinode-563238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238 sudo cat                                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m02_multinode-563238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03:/home/docker/cp-test_multinode-563238-m02_multinode-563238-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238-m03 sudo cat                                   | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m02_multinode-563238-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp testdata/cp-test.txt                                                | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2129143398/001/cp-test_multinode-563238-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238:/home/docker/cp-test_multinode-563238-m03_multinode-563238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238 sudo cat                                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m03_multinode-563238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02:/home/docker/cp-test_multinode-563238-m03_multinode-563238-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238-m02 sudo cat                                   | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m03_multinode-563238-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-563238 node stop m03                                                          | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	| node    | multinode-563238 node start                                                             | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-563238                                                                | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:11 UTC |                     |
	| stop    | -p multinode-563238                                                                     | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:11 UTC |                     |
	| start   | -p multinode-563238                                                                     | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:13 UTC | 20 May 24 11:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-563238                                                                | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:13:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:13:06.580258   40362 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:13:06.580529   40362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:13:06.580539   40362 out.go:304] Setting ErrFile to fd 2...
	I0520 11:13:06.580543   40362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:13:06.580708   40362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:13:06.581256   40362 out.go:298] Setting JSON to false
	I0520 11:13:06.582137   40362 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3330,"bootTime":1716200257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:13:06.582195   40362 start.go:139] virtualization: kvm guest
	I0520 11:13:06.584352   40362 out.go:177] * [multinode-563238] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:13:06.585661   40362 notify.go:220] Checking for updates...
	I0520 11:13:06.585682   40362 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:13:06.587033   40362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:13:06.588302   40362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:13:06.589537   40362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:13:06.590764   40362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:13:06.592138   40362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:13:06.593729   40362 config.go:182] Loaded profile config "multinode-563238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:13:06.593840   40362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:13:06.594208   40362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:13:06.594262   40362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:13:06.609195   40362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42621
	I0520 11:13:06.609626   40362 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:13:06.610265   40362 main.go:141] libmachine: Using API Version  1
	I0520 11:13:06.610292   40362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:13:06.610710   40362 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:13:06.610951   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:13:06.645584   40362 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:13:06.646930   40362 start.go:297] selected driver: kvm2
	I0520 11:13:06.646948   40362 start.go:901] validating driver "kvm2" against &{Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:13:06.647091   40362 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:13:06.647463   40362 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:13:06.647560   40362 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:13:06.662193   40362 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:13:06.662886   40362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:13:06.662918   40362 cni.go:84] Creating CNI manager for ""
	I0520 11:13:06.662930   40362 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 11:13:06.662991   40362 start.go:340] cluster config:
	{Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:13:06.663126   40362 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:13:06.665828   40362 out.go:177] * Starting "multinode-563238" primary control-plane node in "multinode-563238" cluster
	I0520 11:13:06.666953   40362 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:13:06.666985   40362 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:13:06.666997   40362 cache.go:56] Caching tarball of preloaded images
	I0520 11:13:06.667073   40362 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:13:06.667085   40362 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:13:06.667216   40362 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/config.json ...
	I0520 11:13:06.667420   40362 start.go:360] acquireMachinesLock for multinode-563238: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:13:06.667473   40362 start.go:364] duration metric: took 33.362µs to acquireMachinesLock for "multinode-563238"
	I0520 11:13:06.667492   40362 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:13:06.667502   40362 fix.go:54] fixHost starting: 
	I0520 11:13:06.667754   40362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:13:06.667790   40362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:13:06.681455   40362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0520 11:13:06.681886   40362 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:13:06.682372   40362 main.go:141] libmachine: Using API Version  1
	I0520 11:13:06.682400   40362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:13:06.682658   40362 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:13:06.682834   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:13:06.682984   40362 main.go:141] libmachine: (multinode-563238) Calling .GetState
	I0520 11:13:06.684407   40362 fix.go:112] recreateIfNeeded on multinode-563238: state=Running err=<nil>
	W0520 11:13:06.684439   40362 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:13:06.686100   40362 out.go:177] * Updating the running kvm2 "multinode-563238" VM ...
	I0520 11:13:06.687256   40362 machine.go:94] provisionDockerMachine start ...
	I0520 11:13:06.687270   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:13:06.687454   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:06.689884   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.690355   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:06.690372   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.690459   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:06.690614   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.690743   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.690881   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:06.691027   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:06.691242   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:06.691254   40362 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:13:06.806394   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-563238
	
	I0520 11:13:06.806424   40362 main.go:141] libmachine: (multinode-563238) Calling .GetMachineName
	I0520 11:13:06.806677   40362 buildroot.go:166] provisioning hostname "multinode-563238"
	I0520 11:13:06.806704   40362 main.go:141] libmachine: (multinode-563238) Calling .GetMachineName
	I0520 11:13:06.806883   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:06.809489   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.809840   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:06.809861   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.809973   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:06.810153   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.810331   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.810465   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:06.810621   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:06.810837   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:06.810849   40362 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-563238 && echo "multinode-563238" | sudo tee /etc/hostname
	I0520 11:13:06.932778   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-563238
	
	I0520 11:13:06.932817   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:06.935531   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.935893   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:06.935931   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.936082   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:06.936242   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.936418   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.936546   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:06.936677   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:06.936837   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:06.936853   40362 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-563238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-563238/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-563238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:13:07.046173   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:13:07.046207   40362 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:13:07.046234   40362 buildroot.go:174] setting up certificates
	I0520 11:13:07.046241   40362 provision.go:84] configureAuth start
	I0520 11:13:07.046249   40362 main.go:141] libmachine: (multinode-563238) Calling .GetMachineName
	I0520 11:13:07.046490   40362 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:13:07.049140   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.049510   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.049537   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.049718   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:07.051745   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.052053   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.052074   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.052230   40362 provision.go:143] copyHostCerts
	I0520 11:13:07.052256   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:13:07.052283   40362 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:13:07.052298   40362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:13:07.052377   40362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:13:07.052469   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:13:07.052487   40362 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:13:07.052491   40362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:13:07.052516   40362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:13:07.052578   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:13:07.052596   40362 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:13:07.052603   40362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:13:07.052626   40362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:13:07.052724   40362 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.multinode-563238 san=[127.0.0.1 192.168.39.145 localhost minikube multinode-563238]
	I0520 11:13:07.262000   40362 provision.go:177] copyRemoteCerts
	I0520 11:13:07.262053   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:13:07.262075   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:07.264651   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.265029   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.265062   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.265193   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:07.265404   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:07.265568   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:07.265686   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:13:07.351571   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 11:13:07.351637   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:13:07.381626   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 11:13:07.381693   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:13:07.406311   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 11:13:07.406375   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 11:13:07.431518   40362 provision.go:87] duration metric: took 385.265618ms to configureAuth
	I0520 11:13:07.431551   40362 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:13:07.431815   40362 config.go:182] Loaded profile config "multinode-563238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:13:07.431908   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:07.434370   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.434721   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.434751   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.434897   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:07.435076   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:07.435211   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:07.435329   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:07.435471   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:07.435646   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:07.435660   40362 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:14:38.274132   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:14:38.274162   40362 machine.go:97] duration metric: took 1m31.586896667s to provisionDockerMachine
	I0520 11:14:38.274173   40362 start.go:293] postStartSetup for "multinode-563238" (driver="kvm2")
	I0520 11:14:38.274183   40362 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:14:38.274198   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.274555   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:14:38.274584   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.277536   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.277911   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.277931   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.278133   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.278317   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.278488   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.278631   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:14:38.364425   40362 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:14:38.368757   40362 command_runner.go:130] > NAME=Buildroot
	I0520 11:14:38.368774   40362 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 11:14:38.368778   40362 command_runner.go:130] > ID=buildroot
	I0520 11:14:38.368783   40362 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 11:14:38.368788   40362 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 11:14:38.368920   40362 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:14:38.368956   40362 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:14:38.369035   40362 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:14:38.369125   40362 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:14:38.369137   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 11:14:38.369220   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:14:38.378382   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:14:38.403233   40362 start.go:296] duration metric: took 129.047244ms for postStartSetup
	I0520 11:14:38.403298   40362 fix.go:56] duration metric: took 1m31.735795776s for fixHost
	I0520 11:14:38.403328   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.405873   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.406306   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.406337   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.406476   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.406689   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.406844   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.406975   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.407121   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:38.407321   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:14:38.407337   40362 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:14:38.514504   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716203678.495711872
	
	I0520 11:14:38.514534   40362 fix.go:216] guest clock: 1716203678.495711872
	I0520 11:14:38.514543   40362 fix.go:229] Guest: 2024-05-20 11:14:38.495711872 +0000 UTC Remote: 2024-05-20 11:14:38.403306332 +0000 UTC m=+91.856185819 (delta=92.40554ms)
	I0520 11:14:38.514570   40362 fix.go:200] guest clock delta is within tolerance: 92.40554ms
	I0520 11:14:38.514579   40362 start.go:83] releasing machines lock for "multinode-563238", held for 1m31.847095704s
	I0520 11:14:38.514607   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.514898   40362 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:14:38.517488   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.517891   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.517917   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.518073   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.518526   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.518694   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.518759   40362 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:14:38.518833   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.518941   40362 ssh_runner.go:195] Run: cat /version.json
	I0520 11:14:38.518966   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.521319   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.521416   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.521708   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.521733   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.521883   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.521899   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.521919   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.522096   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.522104   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.522264   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.522272   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.522407   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.522403   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:14:38.522535   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:14:38.597988   40362 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	W0520 11:14:38.598200   40362 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:14:38.598305   40362 ssh_runner.go:195] Run: systemctl --version
	I0520 11:14:38.621915   40362 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 11:14:38.622454   40362 command_runner.go:130] > systemd 252 (252)
	I0520 11:14:38.622483   40362 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 11:14:38.622535   40362 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:14:38.781416   40362 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 11:14:38.788944   40362 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 11:14:38.789213   40362 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:14:38.789265   40362 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:14:38.798884   40362 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 11:14:38.798906   40362 start.go:494] detecting cgroup driver to use...
	I0520 11:14:38.798953   40362 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:14:38.815581   40362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:14:38.829309   40362 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:14:38.829350   40362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:14:38.842495   40362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:14:38.856035   40362 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:14:38.997688   40362 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:14:39.135873   40362 docker.go:233] disabling docker service ...
	I0520 11:14:39.135942   40362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:14:39.153541   40362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:14:39.166895   40362 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:14:39.305230   40362 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:14:39.444348   40362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:14:39.458428   40362 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:14:39.476968   40362 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 11:14:39.477431   40362 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:14:39.477495   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.487995   40362 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:14:39.488044   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.498290   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.508606   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.518699   40362 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:14:39.529275   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.539244   40362 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.551876   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.562545   40362 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:14:39.572208   40362 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 11:14:39.572260   40362 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:14:39.581762   40362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:39.744659   40362 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:14:43.214343   40362 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.469645141s)
	I0520 11:14:43.214380   40362 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:14:43.214425   40362 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:14:43.219563   40362 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 11:14:43.219589   40362 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 11:14:43.219597   40362 command_runner.go:130] > Device: 0,22	Inode: 1324        Links: 1
	I0520 11:14:43.219607   40362 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 11:14:43.219616   40362 command_runner.go:130] > Access: 2024-05-20 11:14:43.085564325 +0000
	I0520 11:14:43.219625   40362 command_runner.go:130] > Modify: 2024-05-20 11:14:43.085564325 +0000
	I0520 11:14:43.219633   40362 command_runner.go:130] > Change: 2024-05-20 11:14:43.085564325 +0000
	I0520 11:14:43.219638   40362 command_runner.go:130] >  Birth: -
	I0520 11:14:43.219961   40362 start.go:562] Will wait 60s for crictl version
	I0520 11:14:43.220019   40362 ssh_runner.go:195] Run: which crictl
	I0520 11:14:43.223850   40362 command_runner.go:130] > /usr/bin/crictl
	I0520 11:14:43.224026   40362 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:14:43.264030   40362 command_runner.go:130] > Version:  0.1.0
	I0520 11:14:43.264059   40362 command_runner.go:130] > RuntimeName:  cri-o
	I0520 11:14:43.264064   40362 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 11:14:43.264070   40362 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 11:14:43.265433   40362 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:14:43.265508   40362 ssh_runner.go:195] Run: crio --version
	I0520 11:14:43.293207   40362 command_runner.go:130] > crio version 1.29.1
	I0520 11:14:43.293236   40362 command_runner.go:130] > Version:        1.29.1
	I0520 11:14:43.293244   40362 command_runner.go:130] > GitCommit:      unknown
	I0520 11:14:43.293248   40362 command_runner.go:130] > GitCommitDate:  unknown
	I0520 11:14:43.293252   40362 command_runner.go:130] > GitTreeState:   clean
	I0520 11:14:43.293257   40362 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 11:14:43.293261   40362 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 11:14:43.293264   40362 command_runner.go:130] > Compiler:       gc
	I0520 11:14:43.293269   40362 command_runner.go:130] > Platform:       linux/amd64
	I0520 11:14:43.293277   40362 command_runner.go:130] > Linkmode:       dynamic
	I0520 11:14:43.293282   40362 command_runner.go:130] > BuildTags:      
	I0520 11:14:43.293286   40362 command_runner.go:130] >   containers_image_ostree_stub
	I0520 11:14:43.293292   40362 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 11:14:43.293299   40362 command_runner.go:130] >   btrfs_noversion
	I0520 11:14:43.293306   40362 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 11:14:43.293314   40362 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 11:14:43.293321   40362 command_runner.go:130] >   seccomp
	I0520 11:14:43.293328   40362 command_runner.go:130] > LDFlags:          unknown
	I0520 11:14:43.293339   40362 command_runner.go:130] > SeccompEnabled:   true
	I0520 11:14:43.293344   40362 command_runner.go:130] > AppArmorEnabled:  false
	I0520 11:14:43.294554   40362 ssh_runner.go:195] Run: crio --version
	I0520 11:14:43.323133   40362 command_runner.go:130] > crio version 1.29.1
	I0520 11:14:43.323151   40362 command_runner.go:130] > Version:        1.29.1
	I0520 11:14:43.323157   40362 command_runner.go:130] > GitCommit:      unknown
	I0520 11:14:43.323161   40362 command_runner.go:130] > GitCommitDate:  unknown
	I0520 11:14:43.323164   40362 command_runner.go:130] > GitTreeState:   clean
	I0520 11:14:43.323169   40362 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 11:14:43.323173   40362 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 11:14:43.323178   40362 command_runner.go:130] > Compiler:       gc
	I0520 11:14:43.323182   40362 command_runner.go:130] > Platform:       linux/amd64
	I0520 11:14:43.323186   40362 command_runner.go:130] > Linkmode:       dynamic
	I0520 11:14:43.323190   40362 command_runner.go:130] > BuildTags:      
	I0520 11:14:43.323194   40362 command_runner.go:130] >   containers_image_ostree_stub
	I0520 11:14:43.323198   40362 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 11:14:43.323204   40362 command_runner.go:130] >   btrfs_noversion
	I0520 11:14:43.323208   40362 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 11:14:43.323213   40362 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 11:14:43.323217   40362 command_runner.go:130] >   seccomp
	I0520 11:14:43.323221   40362 command_runner.go:130] > LDFlags:          unknown
	I0520 11:14:43.323226   40362 command_runner.go:130] > SeccompEnabled:   true
	I0520 11:14:43.323230   40362 command_runner.go:130] > AppArmorEnabled:  false
	I0520 11:14:43.325523   40362 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:14:43.327250   40362 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:14:43.329639   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:43.330041   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:43.330072   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:43.330275   40362 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:14:43.334369   40362 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 11:14:43.334453   40362 kubeadm.go:877] updating cluster {Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:14:43.334575   40362 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:14:43.334614   40362 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:14:43.383753   40362 command_runner.go:130] > {
	I0520 11:14:43.383782   40362 command_runner.go:130] >   "images": [
	I0520 11:14:43.383789   40362 command_runner.go:130] >     {
	I0520 11:14:43.383809   40362 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 11:14:43.383816   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.383824   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 11:14:43.383830   40362 command_runner.go:130] >       ],
	I0520 11:14:43.383835   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.383848   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 11:14:43.383862   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 11:14:43.383870   40362 command_runner.go:130] >       ],
	I0520 11:14:43.383880   40362 command_runner.go:130] >       "size": "65291810",
	I0520 11:14:43.383890   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.383897   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.383910   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.383919   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.383926   40362 command_runner.go:130] >     },
	I0520 11:14:43.383934   40362 command_runner.go:130] >     {
	I0520 11:14:43.383945   40362 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 11:14:43.383952   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.383961   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 11:14:43.383969   40362 command_runner.go:130] >       ],
	I0520 11:14:43.383976   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.383990   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 11:14:43.384005   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 11:14:43.384012   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384021   40362 command_runner.go:130] >       "size": "1363676",
	I0520 11:14:43.384032   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384043   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384048   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384055   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384064   40362 command_runner.go:130] >     },
	I0520 11:14:43.384070   40362 command_runner.go:130] >     {
	I0520 11:14:43.384083   40362 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 11:14:43.384093   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384103   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 11:14:43.384111   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384118   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384135   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 11:14:43.384151   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 11:14:43.384161   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384168   40362 command_runner.go:130] >       "size": "31470524",
	I0520 11:14:43.384177   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384185   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384194   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384203   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384210   40362 command_runner.go:130] >     },
	I0520 11:14:43.384217   40362 command_runner.go:130] >     {
	I0520 11:14:43.384231   40362 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 11:14:43.384240   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384249   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 11:14:43.384257   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384265   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384280   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 11:14:43.384301   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 11:14:43.384310   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384318   40362 command_runner.go:130] >       "size": "61245718",
	I0520 11:14:43.384328   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384339   40362 command_runner.go:130] >       "username": "nonroot",
	I0520 11:14:43.384348   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384356   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384363   40362 command_runner.go:130] >     },
	I0520 11:14:43.384371   40362 command_runner.go:130] >     {
	I0520 11:14:43.384383   40362 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 11:14:43.384392   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384400   40362 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 11:14:43.384409   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384417   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384431   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 11:14:43.384446   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 11:14:43.384454   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384461   40362 command_runner.go:130] >       "size": "150779692",
	I0520 11:14:43.384471   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.384479   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.384488   40362 command_runner.go:130] >       },
	I0520 11:14:43.384495   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384505   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384513   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384522   40362 command_runner.go:130] >     },
	I0520 11:14:43.384528   40362 command_runner.go:130] >     {
	I0520 11:14:43.384539   40362 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 11:14:43.384550   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384559   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 11:14:43.384569   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384577   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384593   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 11:14:43.384608   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 11:14:43.384617   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384624   40362 command_runner.go:130] >       "size": "117601759",
	I0520 11:14:43.384634   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.384641   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.384650   40362 command_runner.go:130] >       },
	I0520 11:14:43.384657   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384667   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384677   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384683   40362 command_runner.go:130] >     },
	I0520 11:14:43.384691   40362 command_runner.go:130] >     {
	I0520 11:14:43.384702   40362 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 11:14:43.384712   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384724   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 11:14:43.384732   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384740   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384756   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 11:14:43.384773   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 11:14:43.384781   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384788   40362 command_runner.go:130] >       "size": "112170310",
	I0520 11:14:43.384802   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.384812   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.384819   40362 command_runner.go:130] >       },
	I0520 11:14:43.384827   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384837   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384846   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384852   40362 command_runner.go:130] >     },
	I0520 11:14:43.384860   40362 command_runner.go:130] >     {
	I0520 11:14:43.384871   40362 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 11:14:43.384881   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384891   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 11:14:43.384900   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384908   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384947   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 11:14:43.384960   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 11:14:43.384965   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384970   40362 command_runner.go:130] >       "size": "85933465",
	I0520 11:14:43.384975   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384992   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384998   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.385005   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.385010   40362 command_runner.go:130] >     },
	I0520 11:14:43.385016   40362 command_runner.go:130] >     {
	I0520 11:14:43.385026   40362 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 11:14:43.385034   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.385042   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 11:14:43.385048   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385055   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.385066   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 11:14:43.385078   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 11:14:43.385083   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385090   40362 command_runner.go:130] >       "size": "63026504",
	I0520 11:14:43.385097   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.385108   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.385115   40362 command_runner.go:130] >       },
	I0520 11:14:43.385125   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.385131   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.385138   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.385146   40362 command_runner.go:130] >     },
	I0520 11:14:43.385153   40362 command_runner.go:130] >     {
	I0520 11:14:43.385167   40362 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 11:14:43.385180   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.385191   40362 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 11:14:43.385198   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385207   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.385220   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 11:14:43.385235   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 11:14:43.385244   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385252   40362 command_runner.go:130] >       "size": "750414",
	I0520 11:14:43.385263   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.385273   40362 command_runner.go:130] >         "value": "65535"
	I0520 11:14:43.385282   40362 command_runner.go:130] >       },
	I0520 11:14:43.385290   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.385297   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.385305   40362 command_runner.go:130] >       "pinned": true
	I0520 11:14:43.385311   40362 command_runner.go:130] >     }
	I0520 11:14:43.385318   40362 command_runner.go:130] >   ]
	I0520 11:14:43.385324   40362 command_runner.go:130] > }
	I0520 11:14:43.386127   40362 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:14:43.386146   40362 crio.go:433] Images already preloaded, skipping extraction
	I0520 11:14:43.386208   40362 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:14:43.419456   40362 command_runner.go:130] > {
	I0520 11:14:43.419475   40362 command_runner.go:130] >   "images": [
	I0520 11:14:43.419479   40362 command_runner.go:130] >     {
	I0520 11:14:43.419486   40362 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 11:14:43.419492   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419497   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 11:14:43.419502   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419506   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419542   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 11:14:43.419562   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 11:14:43.419568   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419577   40362 command_runner.go:130] >       "size": "65291810",
	I0520 11:14:43.419581   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.419585   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.419590   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.419597   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.419602   40362 command_runner.go:130] >     },
	I0520 11:14:43.419606   40362 command_runner.go:130] >     {
	I0520 11:14:43.419615   40362 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 11:14:43.419626   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419638   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 11:14:43.419648   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419657   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419671   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 11:14:43.419683   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 11:14:43.419690   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419694   40362 command_runner.go:130] >       "size": "1363676",
	I0520 11:14:43.419703   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.419717   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.419727   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.419737   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.419745   40362 command_runner.go:130] >     },
	I0520 11:14:43.419753   40362 command_runner.go:130] >     {
	I0520 11:14:43.419767   40362 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 11:14:43.419776   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419783   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 11:14:43.419792   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419802   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419818   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 11:14:43.419834   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 11:14:43.419843   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419853   40362 command_runner.go:130] >       "size": "31470524",
	I0520 11:14:43.419862   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.419870   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.419878   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.419885   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.419894   40362 command_runner.go:130] >     },
	I0520 11:14:43.419900   40362 command_runner.go:130] >     {
	I0520 11:14:43.419913   40362 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 11:14:43.419923   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419934   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 11:14:43.419943   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419954   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419966   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 11:14:43.419983   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 11:14:43.419993   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420001   40362 command_runner.go:130] >       "size": "61245718",
	I0520 11:14:43.420011   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.420021   40362 command_runner.go:130] >       "username": "nonroot",
	I0520 11:14:43.420030   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420040   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420048   40362 command_runner.go:130] >     },
	I0520 11:14:43.420062   40362 command_runner.go:130] >     {
	I0520 11:14:43.420071   40362 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 11:14:43.420081   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420092   40362 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 11:14:43.420101   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420111   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420125   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 11:14:43.420138   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 11:14:43.420147   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420156   40362 command_runner.go:130] >       "size": "150779692",
	I0520 11:14:43.420163   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420169   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420177   40362 command_runner.go:130] >       },
	I0520 11:14:43.420188   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420195   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420205   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420213   40362 command_runner.go:130] >     },
	I0520 11:14:43.420220   40362 command_runner.go:130] >     {
	I0520 11:14:43.420232   40362 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 11:14:43.420247   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420256   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 11:14:43.420262   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420269   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420285   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 11:14:43.420300   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 11:14:43.420310   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420321   40362 command_runner.go:130] >       "size": "117601759",
	I0520 11:14:43.420330   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420339   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420348   40362 command_runner.go:130] >       },
	I0520 11:14:43.420357   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420364   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420369   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420377   40362 command_runner.go:130] >     },
	I0520 11:14:43.420386   40362 command_runner.go:130] >     {
	I0520 11:14:43.420397   40362 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 11:14:43.420406   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420418   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 11:14:43.420426   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420436   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420466   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 11:14:43.420486   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 11:14:43.420491   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420497   40362 command_runner.go:130] >       "size": "112170310",
	I0520 11:14:43.420510   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420517   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420526   40362 command_runner.go:130] >       },
	I0520 11:14:43.420533   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420542   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420549   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420557   40362 command_runner.go:130] >     },
	I0520 11:14:43.420562   40362 command_runner.go:130] >     {
	I0520 11:14:43.420573   40362 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 11:14:43.420579   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420584   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 11:14:43.420590   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420595   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420611   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 11:14:43.420625   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 11:14:43.420631   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420642   40362 command_runner.go:130] >       "size": "85933465",
	I0520 11:14:43.420648   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.420659   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420669   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420678   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420687   40362 command_runner.go:130] >     },
	I0520 11:14:43.420695   40362 command_runner.go:130] >     {
	I0520 11:14:43.420709   40362 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 11:14:43.420718   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420729   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 11:14:43.420738   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420744   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420758   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 11:14:43.420770   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 11:14:43.420776   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420780   40362 command_runner.go:130] >       "size": "63026504",
	I0520 11:14:43.420786   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420790   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420796   40362 command_runner.go:130] >       },
	I0520 11:14:43.420800   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420805   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420812   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420815   40362 command_runner.go:130] >     },
	I0520 11:14:43.420820   40362 command_runner.go:130] >     {
	I0520 11:14:43.420825   40362 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 11:14:43.420831   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420836   40362 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 11:14:43.420842   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420846   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420855   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 11:14:43.420864   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 11:14:43.420870   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420874   40362 command_runner.go:130] >       "size": "750414",
	I0520 11:14:43.420880   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420884   40362 command_runner.go:130] >         "value": "65535"
	I0520 11:14:43.420891   40362 command_runner.go:130] >       },
	I0520 11:14:43.420895   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420901   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420912   40362 command_runner.go:130] >       "pinned": true
	I0520 11:14:43.420918   40362 command_runner.go:130] >     }
	I0520 11:14:43.420921   40362 command_runner.go:130] >   ]
	I0520 11:14:43.420943   40362 command_runner.go:130] > }
	I0520 11:14:43.421071   40362 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:14:43.421081   40362 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:14:43.421089   40362 kubeadm.go:928] updating node { 192.168.39.145 8443 v1.30.1 crio true true} ...
	I0520 11:14:43.421189   40362 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-563238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:14:43.421250   40362 ssh_runner.go:195] Run: crio config
	I0520 11:14:43.453637   40362 command_runner.go:130] ! time="2024-05-20 11:14:43.434932160Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 11:14:43.460085   40362 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 11:14:43.464280   40362 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 11:14:43.464314   40362 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 11:14:43.464325   40362 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 11:14:43.464330   40362 command_runner.go:130] > #
	I0520 11:14:43.464340   40362 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 11:14:43.464352   40362 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 11:14:43.464363   40362 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 11:14:43.464383   40362 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 11:14:43.464390   40362 command_runner.go:130] > # reload'.
	I0520 11:14:43.464401   40362 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 11:14:43.464415   40362 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 11:14:43.464427   40362 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 11:14:43.464441   40362 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 11:14:43.464447   40362 command_runner.go:130] > [crio]
	I0520 11:14:43.464458   40362 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 11:14:43.464472   40362 command_runner.go:130] > # containers images, in this directory.
	I0520 11:14:43.464480   40362 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 11:14:43.464496   40362 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 11:14:43.464506   40362 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 11:14:43.464520   40362 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 11:14:43.464530   40362 command_runner.go:130] > # imagestore = ""
	I0520 11:14:43.464542   40362 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 11:14:43.464556   40362 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 11:14:43.464567   40362 command_runner.go:130] > storage_driver = "overlay"
	I0520 11:14:43.464577   40362 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 11:14:43.464590   40362 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 11:14:43.464599   40362 command_runner.go:130] > storage_option = [
	I0520 11:14:43.464610   40362 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 11:14:43.464618   40362 command_runner.go:130] > ]
	I0520 11:14:43.464630   40362 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 11:14:43.464643   40362 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 11:14:43.464652   40362 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 11:14:43.464662   40362 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 11:14:43.464676   40362 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 11:14:43.464686   40362 command_runner.go:130] > # always happen on a node reboot
	I0520 11:14:43.464695   40362 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 11:14:43.464711   40362 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 11:14:43.464725   40362 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 11:14:43.464736   40362 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 11:14:43.464745   40362 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 11:14:43.464761   40362 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 11:14:43.464777   40362 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 11:14:43.464787   40362 command_runner.go:130] > # internal_wipe = true
	I0520 11:14:43.464801   40362 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 11:14:43.464813   40362 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 11:14:43.464827   40362 command_runner.go:130] > # internal_repair = false
	I0520 11:14:43.464839   40362 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 11:14:43.464852   40362 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 11:14:43.464865   40362 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 11:14:43.464876   40362 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 11:14:43.464890   40362 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 11:14:43.464899   40362 command_runner.go:130] > [crio.api]
	I0520 11:14:43.464908   40362 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 11:14:43.464919   40362 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 11:14:43.464942   40362 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 11:14:43.464953   40362 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 11:14:43.464967   40362 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 11:14:43.464998   40362 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 11:14:43.465008   40362 command_runner.go:130] > # stream_port = "0"
	I0520 11:14:43.465017   40362 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 11:14:43.465025   40362 command_runner.go:130] > # stream_enable_tls = false
	I0520 11:14:43.465036   40362 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 11:14:43.465046   40362 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 11:14:43.465059   40362 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 11:14:43.465072   40362 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 11:14:43.465081   40362 command_runner.go:130] > # minutes.
	I0520 11:14:43.465088   40362 command_runner.go:130] > # stream_tls_cert = ""
	I0520 11:14:43.465100   40362 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 11:14:43.465111   40362 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 11:14:43.465120   40362 command_runner.go:130] > # stream_tls_key = ""
	I0520 11:14:43.465133   40362 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 11:14:43.465146   40362 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 11:14:43.465168   40362 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 11:14:43.465178   40362 command_runner.go:130] > # stream_tls_ca = ""
	I0520 11:14:43.465192   40362 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 11:14:43.465203   40362 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 11:14:43.465216   40362 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 11:14:43.465227   40362 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 11:14:43.465240   40362 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 11:14:43.465253   40362 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 11:14:43.465263   40362 command_runner.go:130] > [crio.runtime]
	I0520 11:14:43.465275   40362 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 11:14:43.465285   40362 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 11:14:43.465294   40362 command_runner.go:130] > # "nofile=1024:2048"
	I0520 11:14:43.465306   40362 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 11:14:43.465316   40362 command_runner.go:130] > # default_ulimits = [
	I0520 11:14:43.465323   40362 command_runner.go:130] > # ]
	I0520 11:14:43.465334   40362 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 11:14:43.465343   40362 command_runner.go:130] > # no_pivot = false
	I0520 11:14:43.465355   40362 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 11:14:43.465368   40362 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 11:14:43.465376   40362 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 11:14:43.465389   40362 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 11:14:43.465402   40362 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 11:14:43.465416   40362 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 11:14:43.465427   40362 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 11:14:43.465438   40362 command_runner.go:130] > # Cgroup setting for conmon
	I0520 11:14:43.465453   40362 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 11:14:43.465462   40362 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 11:14:43.465473   40362 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 11:14:43.465485   40362 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 11:14:43.465499   40362 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 11:14:43.465508   40362 command_runner.go:130] > conmon_env = [
	I0520 11:14:43.465519   40362 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 11:14:43.465527   40362 command_runner.go:130] > ]
	I0520 11:14:43.465537   40362 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 11:14:43.465548   40362 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 11:14:43.465563   40362 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 11:14:43.465572   40362 command_runner.go:130] > # default_env = [
	I0520 11:14:43.465578   40362 command_runner.go:130] > # ]
	I0520 11:14:43.465591   40362 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 11:14:43.465607   40362 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 11:14:43.465616   40362 command_runner.go:130] > # selinux = false
	I0520 11:14:43.465627   40362 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 11:14:43.465640   40362 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 11:14:43.465653   40362 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 11:14:43.465663   40362 command_runner.go:130] > # seccomp_profile = ""
	I0520 11:14:43.465676   40362 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 11:14:43.465688   40362 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 11:14:43.465699   40362 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 11:14:43.465709   40362 command_runner.go:130] > # which might increase security.
	I0520 11:14:43.465717   40362 command_runner.go:130] > # This option is currently deprecated,
	I0520 11:14:43.465730   40362 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 11:14:43.465741   40362 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 11:14:43.465754   40362 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 11:14:43.465768   40362 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 11:14:43.465782   40362 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 11:14:43.465795   40362 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 11:14:43.465806   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.465818   40362 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 11:14:43.465836   40362 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 11:14:43.465847   40362 command_runner.go:130] > # the cgroup blockio controller.
	I0520 11:14:43.465855   40362 command_runner.go:130] > # blockio_config_file = ""
	I0520 11:14:43.465870   40362 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 11:14:43.465879   40362 command_runner.go:130] > # blockio parameters.
	I0520 11:14:43.465886   40362 command_runner.go:130] > # blockio_reload = false
	I0520 11:14:43.465900   40362 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 11:14:43.465909   40362 command_runner.go:130] > # irqbalance daemon.
	I0520 11:14:43.465919   40362 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 11:14:43.465932   40362 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 11:14:43.465947   40362 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 11:14:43.465961   40362 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 11:14:43.465974   40362 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 11:14:43.465986   40362 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 11:14:43.465998   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.466009   40362 command_runner.go:130] > # rdt_config_file = ""
	I0520 11:14:43.466021   40362 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 11:14:43.466032   40362 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 11:14:43.466057   40362 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 11:14:43.466067   40362 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 11:14:43.466077   40362 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 11:14:43.466091   40362 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 11:14:43.466099   40362 command_runner.go:130] > # will be added.
	I0520 11:14:43.466107   40362 command_runner.go:130] > # default_capabilities = [
	I0520 11:14:43.466116   40362 command_runner.go:130] > # 	"CHOWN",
	I0520 11:14:43.466124   40362 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 11:14:43.466133   40362 command_runner.go:130] > # 	"FSETID",
	I0520 11:14:43.466141   40362 command_runner.go:130] > # 	"FOWNER",
	I0520 11:14:43.466149   40362 command_runner.go:130] > # 	"SETGID",
	I0520 11:14:43.466156   40362 command_runner.go:130] > # 	"SETUID",
	I0520 11:14:43.466163   40362 command_runner.go:130] > # 	"SETPCAP",
	I0520 11:14:43.466172   40362 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 11:14:43.466181   40362 command_runner.go:130] > # 	"KILL",
	I0520 11:14:43.466187   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466201   40362 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 11:14:43.466216   40362 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 11:14:43.466228   40362 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 11:14:43.466241   40362 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 11:14:43.466253   40362 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 11:14:43.466263   40362 command_runner.go:130] > default_sysctls = [
	I0520 11:14:43.466272   40362 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 11:14:43.466281   40362 command_runner.go:130] > ]
	I0520 11:14:43.466289   40362 command_runner.go:130] > # List of devices on the host that a
	I0520 11:14:43.466303   40362 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 11:14:43.466313   40362 command_runner.go:130] > # allowed_devices = [
	I0520 11:14:43.466321   40362 command_runner.go:130] > # 	"/dev/fuse",
	I0520 11:14:43.466327   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466337   40362 command_runner.go:130] > # List of additional devices. specified as
	I0520 11:14:43.466352   40362 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 11:14:43.466364   40362 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 11:14:43.466377   40362 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 11:14:43.466387   40362 command_runner.go:130] > # additional_devices = [
	I0520 11:14:43.466396   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466405   40362 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 11:14:43.466414   40362 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 11:14:43.466421   40362 command_runner.go:130] > # 	"/etc/cdi",
	I0520 11:14:43.466431   40362 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 11:14:43.466437   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466449   40362 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 11:14:43.466462   40362 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 11:14:43.466472   40362 command_runner.go:130] > # Defaults to false.
	I0520 11:14:43.466484   40362 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 11:14:43.466497   40362 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 11:14:43.466509   40362 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 11:14:43.466516   40362 command_runner.go:130] > # hooks_dir = [
	I0520 11:14:43.466527   40362 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 11:14:43.466534   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466545   40362 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 11:14:43.466558   40362 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 11:14:43.466569   40362 command_runner.go:130] > # its default mounts from the following two files:
	I0520 11:14:43.466578   40362 command_runner.go:130] > #
	I0520 11:14:43.466590   40362 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 11:14:43.466603   40362 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 11:14:43.466613   40362 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 11:14:43.466621   40362 command_runner.go:130] > #
	I0520 11:14:43.466632   40362 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 11:14:43.466646   40362 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 11:14:43.466660   40362 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 11:14:43.466672   40362 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 11:14:43.466680   40362 command_runner.go:130] > #
	I0520 11:14:43.466688   40362 command_runner.go:130] > # default_mounts_file = ""
	I0520 11:14:43.466700   40362 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 11:14:43.466714   40362 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 11:14:43.466722   40362 command_runner.go:130] > pids_limit = 1024
	I0520 11:14:43.466733   40362 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0520 11:14:43.466746   40362 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 11:14:43.466760   40362 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 11:14:43.466776   40362 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 11:14:43.466786   40362 command_runner.go:130] > # log_size_max = -1
	I0520 11:14:43.466801   40362 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 11:14:43.466811   40362 command_runner.go:130] > # log_to_journald = false
	I0520 11:14:43.466829   40362 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 11:14:43.466840   40362 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 11:14:43.466849   40362 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 11:14:43.466860   40362 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 11:14:43.466870   40362 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 11:14:43.466880   40362 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 11:14:43.466893   40362 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 11:14:43.466902   40362 command_runner.go:130] > # read_only = false
	I0520 11:14:43.466913   40362 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 11:14:43.466926   40362 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 11:14:43.466937   40362 command_runner.go:130] > # live configuration reload.
	I0520 11:14:43.466944   40362 command_runner.go:130] > # log_level = "info"
	I0520 11:14:43.466955   40362 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 11:14:43.466967   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.466980   40362 command_runner.go:130] > # log_filter = ""
	I0520 11:14:43.466993   40362 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 11:14:43.467009   40362 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 11:14:43.467018   40362 command_runner.go:130] > # separated by comma.
	I0520 11:14:43.467031   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467041   40362 command_runner.go:130] > # uid_mappings = ""
	I0520 11:14:43.467052   40362 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 11:14:43.467065   40362 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 11:14:43.467075   40362 command_runner.go:130] > # separated by comma.
	I0520 11:14:43.467088   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467096   40362 command_runner.go:130] > # gid_mappings = ""
	I0520 11:14:43.467106   40362 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 11:14:43.467120   40362 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 11:14:43.467133   40362 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 11:14:43.467148   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467159   40362 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 11:14:43.467173   40362 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 11:14:43.467184   40362 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 11:14:43.467194   40362 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 11:14:43.467207   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467217   40362 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 11:14:43.467231   40362 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 11:14:43.467245   40362 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 11:14:43.467257   40362 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 11:14:43.467267   40362 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 11:14:43.467278   40362 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 11:14:43.467291   40362 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 11:14:43.467303   40362 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 11:14:43.467312   40362 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 11:14:43.467321   40362 command_runner.go:130] > drop_infra_ctr = false
	I0520 11:14:43.467334   40362 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 11:14:43.467346   40362 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 11:14:43.467363   40362 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 11:14:43.467373   40362 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 11:14:43.467388   40362 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 11:14:43.467400   40362 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 11:14:43.467414   40362 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 11:14:43.467425   40362 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 11:14:43.467436   40362 command_runner.go:130] > # shared_cpuset = ""
	I0520 11:14:43.467449   40362 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 11:14:43.467461   40362 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 11:14:43.467471   40362 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 11:14:43.467486   40362 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 11:14:43.467497   40362 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 11:14:43.467507   40362 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 11:14:43.467520   40362 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 11:14:43.467530   40362 command_runner.go:130] > # enable_criu_support = false
	I0520 11:14:43.467542   40362 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 11:14:43.467555   40362 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 11:14:43.467566   40362 command_runner.go:130] > # enable_pod_events = false
	I0520 11:14:43.467580   40362 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 11:14:43.467593   40362 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 11:14:43.467604   40362 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 11:14:43.467612   40362 command_runner.go:130] > # default_runtime = "runc"
	I0520 11:14:43.467623   40362 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 11:14:43.467638   40362 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 11:14:43.467656   40362 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 11:14:43.467668   40362 command_runner.go:130] > # creation as a file is not desired either.
	I0520 11:14:43.467685   40362 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 11:14:43.467695   40362 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 11:14:43.467702   40362 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 11:14:43.467711   40362 command_runner.go:130] > # ]
	I0520 11:14:43.467722   40362 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 11:14:43.467736   40362 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 11:14:43.467749   40362 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 11:14:43.467761   40362 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 11:14:43.467770   40362 command_runner.go:130] > #
	I0520 11:14:43.467779   40362 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 11:14:43.467789   40362 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 11:14:43.467812   40362 command_runner.go:130] > # runtime_type = "oci"
	I0520 11:14:43.467827   40362 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 11:14:43.467839   40362 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 11:14:43.467850   40362 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 11:14:43.467861   40362 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 11:14:43.467873   40362 command_runner.go:130] > # monitor_env = []
	I0520 11:14:43.467884   40362 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 11:14:43.467892   40362 command_runner.go:130] > # allowed_annotations = []
	I0520 11:14:43.467905   40362 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 11:14:43.467914   40362 command_runner.go:130] > # Where:
	I0520 11:14:43.467924   40362 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 11:14:43.467938   40362 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 11:14:43.467952   40362 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 11:14:43.467966   40362 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 11:14:43.467975   40362 command_runner.go:130] > #   in $PATH.
	I0520 11:14:43.467987   40362 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 11:14:43.467998   40362 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 11:14:43.468010   40362 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 11:14:43.468018   40362 command_runner.go:130] > #   state.
	I0520 11:14:43.468030   40362 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 11:14:43.468042   40362 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0520 11:14:43.468054   40362 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 11:14:43.468066   40362 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 11:14:43.468079   40362 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 11:14:43.468094   40362 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 11:14:43.468105   40362 command_runner.go:130] > #   The currently recognized values are:
	I0520 11:14:43.468117   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 11:14:43.468132   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 11:14:43.468145   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 11:14:43.468158   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 11:14:43.468173   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 11:14:43.468187   40362 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 11:14:43.468201   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 11:14:43.468215   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 11:14:43.468228   40362 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 11:14:43.468241   40362 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 11:14:43.468252   40362 command_runner.go:130] > #   deprecated option "conmon".
	I0520 11:14:43.468264   40362 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 11:14:43.468275   40362 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 11:14:43.468289   40362 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 11:14:43.468300   40362 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 11:14:43.468312   40362 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0520 11:14:43.468324   40362 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 11:14:43.468338   40362 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 11:14:43.468350   40362 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 11:14:43.468358   40362 command_runner.go:130] > #
	I0520 11:14:43.468366   40362 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 11:14:43.468374   40362 command_runner.go:130] > #
	I0520 11:14:43.468385   40362 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 11:14:43.468398   40362 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 11:14:43.468406   40362 command_runner.go:130] > #
	I0520 11:14:43.468417   40362 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 11:14:43.468431   40362 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 11:14:43.468440   40362 command_runner.go:130] > #
	I0520 11:14:43.468450   40362 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 11:14:43.468459   40362 command_runner.go:130] > # feature.
	I0520 11:14:43.468465   40362 command_runner.go:130] > #
	I0520 11:14:43.468476   40362 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 11:14:43.468490   40362 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 11:14:43.468504   40362 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 11:14:43.468517   40362 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 11:14:43.468530   40362 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 11:14:43.468538   40362 command_runner.go:130] > #
	I0520 11:14:43.468549   40362 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 11:14:43.468562   40362 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 11:14:43.468570   40362 command_runner.go:130] > #
	I0520 11:14:43.468582   40362 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0520 11:14:43.468595   40362 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 11:14:43.468603   40362 command_runner.go:130] > #
	I0520 11:14:43.468613   40362 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 11:14:43.468625   40362 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 11:14:43.468632   40362 command_runner.go:130] > # limitation.
	I0520 11:14:43.468643   40362 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 11:14:43.468654   40362 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 11:14:43.468663   40362 command_runner.go:130] > runtime_type = "oci"
	I0520 11:14:43.468671   40362 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 11:14:43.468682   40362 command_runner.go:130] > runtime_config_path = ""
	I0520 11:14:43.468693   40362 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 11:14:43.468704   40362 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 11:14:43.468713   40362 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 11:14:43.468722   40362 command_runner.go:130] > monitor_env = [
	I0520 11:14:43.468732   40362 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 11:14:43.468740   40362 command_runner.go:130] > ]
	I0520 11:14:43.468749   40362 command_runner.go:130] > privileged_without_host_devices = false
	I0520 11:14:43.468764   40362 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 11:14:43.468777   40362 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 11:14:43.468790   40362 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 11:14:43.468803   40362 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0520 11:14:43.468819   40362 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 11:14:43.468834   40362 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 11:14:43.468852   40362 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 11:14:43.468868   40362 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 11:14:43.468880   40362 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 11:14:43.468894   40362 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 11:14:43.468903   40362 command_runner.go:130] > # Example:
	I0520 11:14:43.468912   40362 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 11:14:43.468923   40362 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 11:14:43.468948   40362 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 11:14:43.468957   40362 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 11:14:43.468967   40362 command_runner.go:130] > # cpuset = 0
	I0520 11:14:43.468974   40362 command_runner.go:130] > # cpushares = "0-1"
	I0520 11:14:43.468983   40362 command_runner.go:130] > # Where:
	I0520 11:14:43.468991   40362 command_runner.go:130] > # The workload name is workload-type.
	I0520 11:14:43.469007   40362 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 11:14:43.469019   40362 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 11:14:43.469031   40362 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 11:14:43.469045   40362 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 11:14:43.469058   40362 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0520 11:14:43.469069   40362 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 11:14:43.469084   40362 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 11:14:43.469095   40362 command_runner.go:130] > # Default value is set to true
	I0520 11:14:43.469106   40362 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 11:14:43.469113   40362 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 11:14:43.469120   40362 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 11:14:43.469126   40362 command_runner.go:130] > # Default value is set to 'false'
	I0520 11:14:43.469133   40362 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 11:14:43.469143   40362 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 11:14:43.469148   40362 command_runner.go:130] > #
	I0520 11:14:43.469158   40362 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 11:14:43.469167   40362 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 11:14:43.469177   40362 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 11:14:43.469187   40362 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 11:14:43.469196   40362 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 11:14:43.469202   40362 command_runner.go:130] > [crio.image]
	I0520 11:14:43.469212   40362 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 11:14:43.469219   40362 command_runner.go:130] > # default_transport = "docker://"
	I0520 11:14:43.469229   40362 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 11:14:43.469239   40362 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 11:14:43.469246   40362 command_runner.go:130] > # global_auth_file = ""
	I0520 11:14:43.469255   40362 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 11:14:43.469263   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.469272   40362 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 11:14:43.469283   40362 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 11:14:43.469293   40362 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 11:14:43.469301   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.469308   40362 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 11:14:43.469317   40362 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 11:14:43.469331   40362 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0520 11:14:43.469344   40362 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0520 11:14:43.469355   40362 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 11:14:43.469364   40362 command_runner.go:130] > # pause_command = "/pause"
	I0520 11:14:43.469376   40362 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 11:14:43.469390   40362 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 11:14:43.469403   40362 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 11:14:43.469420   40362 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 11:14:43.469433   40362 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 11:14:43.469445   40362 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 11:14:43.469454   40362 command_runner.go:130] > # pinned_images = [
	I0520 11:14:43.469462   40362 command_runner.go:130] > # ]
	I0520 11:14:43.469478   40362 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 11:14:43.469492   40362 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 11:14:43.469505   40362 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 11:14:43.469518   40362 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 11:14:43.469527   40362 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 11:14:43.469536   40362 command_runner.go:130] > # signature_policy = ""
	I0520 11:14:43.469546   40362 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 11:14:43.469560   40362 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 11:14:43.469574   40362 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 11:14:43.469587   40362 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0520 11:14:43.469600   40362 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 11:14:43.469612   40362 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0520 11:14:43.469626   40362 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 11:14:43.469639   40362 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 11:14:43.469648   40362 command_runner.go:130] > # changing them here.
	I0520 11:14:43.469657   40362 command_runner.go:130] > # insecure_registries = [
	I0520 11:14:43.469665   40362 command_runner.go:130] > # ]
	I0520 11:14:43.469677   40362 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 11:14:43.469688   40362 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 11:14:43.469695   40362 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 11:14:43.469707   40362 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 11:14:43.469718   40362 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 11:14:43.469731   40362 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0520 11:14:43.469740   40362 command_runner.go:130] > # CNI plugins.
	I0520 11:14:43.469749   40362 command_runner.go:130] > [crio.network]
	I0520 11:14:43.469762   40362 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 11:14:43.469773   40362 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 11:14:43.469780   40362 command_runner.go:130] > # cni_default_network = ""
	I0520 11:14:43.469792   40362 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 11:14:43.469803   40362 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 11:14:43.469812   40362 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 11:14:43.469827   40362 command_runner.go:130] > # plugin_dirs = [
	I0520 11:14:43.469838   40362 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 11:14:43.469846   40362 command_runner.go:130] > # ]
	I0520 11:14:43.469857   40362 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 11:14:43.469866   40362 command_runner.go:130] > [crio.metrics]
	I0520 11:14:43.469875   40362 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 11:14:43.469885   40362 command_runner.go:130] > enable_metrics = true
	I0520 11:14:43.469894   40362 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 11:14:43.469905   40362 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 11:14:43.469920   40362 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0520 11:14:43.469934   40362 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 11:14:43.469947   40362 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 11:14:43.469957   40362 command_runner.go:130] > # metrics_collectors = [
	I0520 11:14:43.469963   40362 command_runner.go:130] > # 	"operations",
	I0520 11:14:43.469975   40362 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 11:14:43.469986   40362 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 11:14:43.469994   40362 command_runner.go:130] > # 	"operations_errors",
	I0520 11:14:43.470004   40362 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 11:14:43.470014   40362 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 11:14:43.470024   40362 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 11:14:43.470033   40362 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 11:14:43.470042   40362 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 11:14:43.470050   40362 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 11:14:43.470060   40362 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 11:14:43.470071   40362 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 11:14:43.470079   40362 command_runner.go:130] > # 	"containers_oom_total",
	I0520 11:14:43.470087   40362 command_runner.go:130] > # 	"containers_oom",
	I0520 11:14:43.470095   40362 command_runner.go:130] > # 	"processes_defunct",
	I0520 11:14:43.470105   40362 command_runner.go:130] > # 	"operations_total",
	I0520 11:14:43.470114   40362 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 11:14:43.470124   40362 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 11:14:43.470131   40362 command_runner.go:130] > # 	"operations_errors_total",
	I0520 11:14:43.470142   40362 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 11:14:43.470150   40362 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 11:14:43.470161   40362 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 11:14:43.470172   40362 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 11:14:43.470182   40362 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 11:14:43.470193   40362 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 11:14:43.470203   40362 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 11:14:43.470212   40362 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 11:14:43.470218   40362 command_runner.go:130] > # ]
	I0520 11:14:43.470231   40362 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 11:14:43.470241   40362 command_runner.go:130] > # metrics_port = 9090
	I0520 11:14:43.470253   40362 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 11:14:43.470264   40362 command_runner.go:130] > # metrics_socket = ""
	I0520 11:14:43.470275   40362 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 11:14:43.470288   40362 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 11:14:43.470301   40362 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 11:14:43.470310   40362 command_runner.go:130] > # certificate on any modification event.
	I0520 11:14:43.470319   40362 command_runner.go:130] > # metrics_cert = ""
	I0520 11:14:43.470331   40362 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 11:14:43.470343   40362 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 11:14:43.470370   40362 command_runner.go:130] > # metrics_key = ""
	I0520 11:14:43.470387   40362 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 11:14:43.470395   40362 command_runner.go:130] > [crio.tracing]
	I0520 11:14:43.470406   40362 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 11:14:43.470416   40362 command_runner.go:130] > # enable_tracing = false
	I0520 11:14:43.470426   40362 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0520 11:14:43.470437   40362 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 11:14:43.470452   40362 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 11:14:43.470463   40362 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 11:14:43.470470   40362 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 11:14:43.470476   40362 command_runner.go:130] > [crio.nri]
	I0520 11:14:43.470487   40362 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 11:14:43.470498   40362 command_runner.go:130] > # enable_nri = false
	I0520 11:14:43.470509   40362 command_runner.go:130] > # NRI socket to listen on.
	I0520 11:14:43.470519   40362 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 11:14:43.470529   40362 command_runner.go:130] > # NRI plugin directory to use.
	I0520 11:14:43.470538   40362 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 11:14:43.470549   40362 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 11:14:43.470559   40362 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 11:14:43.470569   40362 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 11:14:43.470579   40362 command_runner.go:130] > # nri_disable_connections = false
	I0520 11:14:43.470591   40362 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 11:14:43.470602   40362 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 11:14:43.470612   40362 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 11:14:43.470623   40362 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 11:14:43.470635   40362 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 11:14:43.470644   40362 command_runner.go:130] > [crio.stats]
	I0520 11:14:43.470654   40362 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 11:14:43.470666   40362 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 11:14:43.470677   40362 command_runner.go:130] > # stats_collection_period = 0
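Note (editorial, not part of the test run): the configuration dump above shows metrics enabled (enable_metrics = true) while metrics_port is left at its commented default of 9090. As a minimal illustrative sketch only, the Go program below probes that endpoint; the 127.0.0.1 address and the /metrics path are assumptions based on the usual Prometheus exposition convention, not taken from this log.

// metricsprobe.go - illustrative sketch, assumes CRI-O metrics on the default port 9090.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("CRI-O metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("failed to read metrics:", err)
		return
	}
	// Print only a prefix; the full exposition is large.
	if len(body) > 400 {
		body = body[:400]
	}
	fmt.Printf("status=%s\n%s\n", resp.Status, body)
}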
	I0520 11:14:43.470784   40362 cni.go:84] Creating CNI manager for ""
	I0520 11:14:43.470795   40362 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 11:14:43.470816   40362 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:14:43.470861   40362 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-563238 NodeName:multinode-563238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:14:43.471017   40362 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-563238"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:14:43.471095   40362 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:14:43.481399   40362 command_runner.go:130] > kubeadm
	I0520 11:14:43.481418   40362 command_runner.go:130] > kubectl
	I0520 11:14:43.481422   40362 command_runner.go:130] > kubelet
	I0520 11:14:43.481439   40362 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:14:43.481481   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:14:43.491181   40362 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0520 11:14:43.508199   40362 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:14:43.525975   40362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0520 11:14:43.543034   40362 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0520 11:14:43.546837   40362 command_runner.go:130] > 192.168.39.145	control-plane.minikube.internal
	I0520 11:14:43.547053   40362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:43.684078   40362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:14:43.698917   40362 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238 for IP: 192.168.39.145
	I0520 11:14:43.698933   40362 certs.go:194] generating shared ca certs ...
	I0520 11:14:43.698946   40362 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:14:43.699086   40362 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:14:43.699131   40362 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:14:43.699147   40362 certs.go:256] generating profile certs ...
	I0520 11:14:43.699229   40362 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/client.key
	I0520 11:14:43.699284   40362 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.key.3a25a639
	I0520 11:14:43.699317   40362 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.key
	I0520 11:14:43.699327   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 11:14:43.699338   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 11:14:43.699350   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 11:14:43.699360   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 11:14:43.699371   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 11:14:43.699382   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 11:14:43.699395   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 11:14:43.699407   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 11:14:43.699455   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:14:43.699480   40362 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:14:43.699491   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:14:43.699512   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:14:43.699532   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:14:43.699552   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:14:43.699589   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:14:43.699612   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 11:14:43.699625   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.699637   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 11:14:43.700175   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:14:43.725101   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:14:43.748352   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:14:43.771646   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:14:43.795061   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:14:43.818572   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:14:43.842377   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:14:43.866164   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:14:43.890072   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:14:43.913034   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:14:43.937294   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:14:43.960645   40362 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:14:43.976966   40362 ssh_runner.go:195] Run: openssl version
	I0520 11:14:43.982559   40362 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 11:14:43.982620   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:14:43.993355   40362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.997532   40362 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.997555   40362 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.997590   40362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:44.003123   40362 command_runner.go:130] > b5213941
	I0520 11:14:44.003185   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:14:44.012371   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:14:44.023068   40362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.027287   40362 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.027372   40362 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.027411   40362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.032717   40362 command_runner.go:130] > 51391683
	I0520 11:14:44.032935   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:14:44.042227   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:14:44.052948   40362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.057171   40362 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.057200   40362 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.057236   40362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.062434   40362 command_runner.go:130] > 3ec20f2e
	I0520 11:14:44.062636   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:14:44.072113   40362 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:14:44.076448   40362 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:14:44.076469   40362 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 11:14:44.076475   40362 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0520 11:14:44.076482   40362 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 11:14:44.076490   40362 command_runner.go:130] > Access: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076498   40362 command_runner.go:130] > Modify: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076506   40362 command_runner.go:130] > Change: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076519   40362 command_runner.go:130] >  Birth: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076559   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:14:44.082134   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.082259   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:14:44.087568   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.087695   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:14:44.093079   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.093207   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:14:44.098322   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.098446   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:14:44.103727   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.103802   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:14:44.109103   40362 command_runner.go:130] > Certificate will not expire
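Note (editorial, not part of the test run): each openssl x509 -noout -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now, and the log records "Certificate will not expire" for each one. The Go sketch below is an equivalent check for illustration; the certificate path is just one of the files checked above and the program is not minikube's actual code.

// certcheck.go - illustrative equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read error:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse error:", err)
		os.Exit(1)
	}
	// Same question as -checkend 86400: is the cert still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}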
	I0520 11:14:44.109213   40362 kubeadm.go:391] StartCluster: {Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:14:44.109335   40362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:14:44.109384   40362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:14:44.145192   40362 command_runner.go:130] > 5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402
	I0520 11:14:44.145214   40362 command_runner.go:130] > 9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2
	I0520 11:14:44.145231   40362 command_runner.go:130] > e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654
	I0520 11:14:44.145241   40362 command_runner.go:130] > 3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879
	I0520 11:14:44.145251   40362 command_runner.go:130] > 59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2
	I0520 11:14:44.145261   40362 command_runner.go:130] > d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c
	I0520 11:14:44.145270   40362 command_runner.go:130] > 28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0
	I0520 11:14:44.145277   40362 command_runner.go:130] > c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9
	I0520 11:14:44.146421   40362 cri.go:89] found id: "5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402"
	I0520 11:14:44.146436   40362 cri.go:89] found id: "9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2"
	I0520 11:14:44.146442   40362 cri.go:89] found id: "e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654"
	I0520 11:14:44.146448   40362 cri.go:89] found id: "3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879"
	I0520 11:14:44.146453   40362 cri.go:89] found id: "59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2"
	I0520 11:14:44.146457   40362 cri.go:89] found id: "d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c"
	I0520 11:14:44.146460   40362 cri.go:89] found id: "28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0"
	I0520 11:14:44.146463   40362 cri.go:89] found id: "c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9"
	I0520 11:14:44.146465   40362 cri.go:89] found id: ""
	I0520 11:14:44.146501   40362 ssh_runner.go:195] Run: sudo runc list -f json
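Note (editorial, not part of the test run): the "found id:" entries above are collected from the crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system invocation a few lines earlier. The Go sketch below mirrors the shape of that listing step for illustration only; it assumes crictl is on PATH with a reachable CRI-O socket and sufficient privileges, and it is not minikube's actual cri helper code.

// crilist.go - illustrative sketch of listing kube-system container IDs via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}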
	
	
	==> CRI-O <==
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.251864898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203767251840127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=980cce4d-5faf-4669-b922-f17f8ee7ebbf name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.252460902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d5b8234-8352-4dd5-a1d2-43e7aee6d09c name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.252539761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d5b8234-8352-4dd5-a1d2-43e7aee6d09c name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.253016525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d5b8234-8352-4dd5-a1d2-43e7aee6d09c name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.297606194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a4b453f-3ef7-475b-82cd-51e6278d0a12 name=/runtime.v1.RuntimeService/Version
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.297679283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a4b453f-3ef7-475b-82cd-51e6278d0a12 name=/runtime.v1.RuntimeService/Version
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.299664985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40cafe89-efb1-48f6-a3da-b1d9cb9c265c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.300074350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203767300045675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40cafe89-efb1-48f6-a3da-b1d9cb9c265c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.300725631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=918d1aa8-f60f-4638-a2b5-21224cc17258 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.300786580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=918d1aa8-f60f-4638-a2b5-21224cc17258 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.303709496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=918d1aa8-f60f-4638-a2b5-21224cc17258 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.344582063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cff1fada-1411-4ef9-87d2-8df23e6bb5f5 name=/runtime.v1.RuntimeService/Version
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.344656352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cff1fada-1411-4ef9-87d2-8df23e6bb5f5 name=/runtime.v1.RuntimeService/Version
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.346050601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6b7dc7e-8c4e-4ed7-8263-7ee7dca755b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.346550993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203767346516867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6b7dc7e-8c4e-4ed7-8263-7ee7dca755b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.347136954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebae7132-a2d1-48bc-a192-b98de0722bcd name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.347241564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebae7132-a2d1-48bc-a192-b98de0722bcd name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.347825676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebae7132-a2d1-48bc-a192-b98de0722bcd name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.396628239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=198a626d-ab8a-4122-a975-ce7a77e5733f name=/runtime.v1.RuntimeService/Version
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.396698680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=198a626d-ab8a-4122-a975-ce7a77e5733f name=/runtime.v1.RuntimeService/Version
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.398099127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a0b72a8-6bf5-4b9b-9d67-7ac17442e337 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.399000004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203767398974119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a0b72a8-6bf5-4b9b-9d67-7ac17442e337 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.399807906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c29b8d04-a93f-4a9e-9a1d-407b985351c9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.399863981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c29b8d04-a93f-4a9e-9a1d-407b985351c9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:16:07 multinode-563238 crio[2862]: time="2024-05-20 11:16:07.400196413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c29b8d04-a93f-4a9e-9a1d-407b985351c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	27bca91f6d681       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      43 seconds ago       Running             busybox                   1                   10f73ac2ef4b9       busybox-fc5497c4f-twghk
	3c73c505331db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   e8981df7ed0d2       coredns-7db6d8ff4d-6pmxg
	397337b81f6ac       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   5f0f94851dba6       kindnet-cwg9g
	39aa15086d2d3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   8b9c34bf7b1ec       kube-proxy-k8lrj
	2f6a556f3371d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   58cddfbaf13ab       storage-provisioner
	0bdd7e83e2d02       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   653f49e6e9517       kube-scheduler-multinode-563238
	df66636af8e04       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   ffe9e8a28237f       etcd-multinode-563238
	f840bd971547b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   a9fc672468d0a       kube-controller-manager-multinode-563238
	b22c7ac35dfae       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   bf729013c521c       kube-apiserver-multinode-563238
	eb869c60e3b08       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   866315da01990       busybox-fc5497c4f-twghk
	5d6855192c829       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   5200ecfdd5794       coredns-7db6d8ff4d-6pmxg
	9649bc3306d6e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   799981de6efef       storage-provisioner
	e46ea94884403       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   eca11eca800ad       kindnet-cwg9g
	3db01e7963ed1       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   5d8102a8a0776       kube-proxy-k8lrj
	59a147db91ff7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   a95aa0d246288       etcd-multinode-563238
	d833ed2f82deb       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   6cd079f4b2e87       kube-apiserver-multinode-563238
	28e64d121b441       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   8417013f2e498       kube-controller-manager-multinode-563238
	c1063cae37531       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   0b4e20bdb7345       kube-scheduler-multinode-563238
	
	
	==> coredns [3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48727 - 31337 "HINFO IN 8412042946722979635.2088388643646220200. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015903138s
	
	
	==> coredns [5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402] <==
	[INFO] 10.244.0.3:51937 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001728176s
	[INFO] 10.244.0.3:56550 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061854s
	[INFO] 10.244.0.3:45509 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003092s
	[INFO] 10.244.0.3:35501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000934994s
	[INFO] 10.244.0.3:54862 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106239s
	[INFO] 10.244.0.3:39157 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046208s
	[INFO] 10.244.0.3:50994 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029828s
	[INFO] 10.244.1.2:44704 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144582s
	[INFO] 10.244.1.2:39127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136518s
	[INFO] 10.244.1.2:59901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125581s
	[INFO] 10.244.1.2:40565 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143893s
	[INFO] 10.244.0.3:40418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193796s
	[INFO] 10.244.0.3:44311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080595s
	[INFO] 10.244.0.3:50008 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092351s
	[INFO] 10.244.0.3:57131 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103833s
	[INFO] 10.244.1.2:35564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165214s
	[INFO] 10.244.1.2:45936 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209405s
	[INFO] 10.244.1.2:38807 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111289s
	[INFO] 10.244.1.2:43094 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103549s
	[INFO] 10.244.0.3:50964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077955s
	[INFO] 10.244.0.3:39356 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081605s
	[INFO] 10.244.0.3:60628 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060538s
	[INFO] 10.244.0.3:60679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-563238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-563238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-563238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_08_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:08:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-563238
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:16:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    multinode-563238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce1ee2e473ae48a6881d18b833115ca9
	  System UUID:                ce1ee2e4-73ae-48a6-881d-18b833115ca9
	  Boot ID:                    a235f95d-3616-48cf-8c1e-5ae1c11ac3e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-twghk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 coredns-7db6d8ff4d-6pmxg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m40s
	  kube-system                 etcd-multinode-563238                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m55s
	  kube-system                 kindnet-cwg9g                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m40s
	  kube-system                 kube-apiserver-multinode-563238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-controller-manager-multinode-563238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-proxy-k8lrj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-scheduler-multinode-563238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m39s              kube-proxy       
	  Normal  Starting                 75s                kube-proxy       
	  Normal  NodeHasSufficientMemory  8m (x8 over 8m)    kubelet          Node multinode-563238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m (x8 over 8m)    kubelet          Node multinode-563238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m (x7 over 8m)    kubelet          Node multinode-563238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m55s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m55s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m54s              kubelet          Node multinode-563238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m54s              kubelet          Node multinode-563238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m54s              kubelet          Node multinode-563238 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m40s              node-controller  Node multinode-563238 event: Registered Node multinode-563238 in Controller
	  Normal  NodeReady                7m8s               kubelet          Node multinode-563238 status is now: NodeReady
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-563238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-563238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-563238 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           64s                node-controller  Node multinode-563238 event: Registered Node multinode-563238 in Controller
	
	
	Name:               multinode-563238-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-563238-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-563238
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T11_15_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:15:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-563238-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:15:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:15:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:15:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:15:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:15:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    multinode-563238-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0875580e0ee4b0fb0a716be7e7602df
	  System UUID:                b0875580-e0ee-4b0f-b0a7-16be7e7602df
	  Boot ID:                    2164aa3c-32ef-472a-8942-ecf21157ea5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tc6cb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kindnet-r88l7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-nqhw5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m36s                  kube-proxy  
	  Normal  Starting                 35s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m41s)  kubelet     Node multinode-563238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m41s)  kubelet     Node multinode-563238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m41s)  kubelet     Node multinode-563238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m31s                  kubelet     Node multinode-563238-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)      kubelet     Node multinode-563238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)      kubelet     Node multinode-563238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)      kubelet     Node multinode-563238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s                    kubelet     Node multinode-563238-m02 status is now: NodeReady
	
	
	Name:               multinode-563238-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-563238-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-563238
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T11_15_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:15:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-563238-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:16:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:16:04 +0000   Mon, 20 May 2024 11:15:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:16:04 +0000   Mon, 20 May 2024 11:15:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:16:04 +0000   Mon, 20 May 2024 11:15:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:16:04 +0000   Mon, 20 May 2024 11:16:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    multinode-563238-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 619bc5823af14f16991d58294e19b309
	  System UUID:                619bc582-3af1-4f16-991d-58294e19b309
	  Boot ID:                    986af61d-f80b-4865-a8a0-9dcfded5ffc9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vq6fj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-proxy-5zgms    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m48s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m10s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m53s)  kubelet     Node multinode-563238-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m53s)  kubelet     Node multinode-563238-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m53s)  kubelet     Node multinode-563238-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m44s                  kubelet     Node multinode-563238-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m15s (x2 over 5m15s)  kubelet     Node multinode-563238-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m15s (x2 over 5m15s)  kubelet     Node multinode-563238-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m15s (x2 over 5m15s)  kubelet     Node multinode-563238-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m6s                   kubelet     Node multinode-563238-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  11s (x2 over 11s)      kubelet     Node multinode-563238-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x2 over 11s)      kubelet     Node multinode-563238-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x2 over 11s)      kubelet     Node multinode-563238-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-563238-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062540] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.191196] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.128374] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.260026] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[May20 11:08] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.491111] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.060565] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.992394] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.103522] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.107523] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.557773] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[ +32.236508] kauditd_printk_skb: 60 callbacks suppressed
	[May20 11:09] kauditd_printk_skb: 14 callbacks suppressed
	[May20 11:14] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.138502] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.164640] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.136027] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.289671] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +3.949188] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +2.104070] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +0.080874] kauditd_printk_skb: 122 callbacks suppressed
	[  +6.132797] kauditd_printk_skb: 52 callbacks suppressed
	[May20 11:15] systemd-fstab-generator[3882]: Ignoring "noauto" option for root device
	[  +0.029310] kauditd_printk_skb: 32 callbacks suppressed
	[ +20.902005] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2] <==
	{"level":"info","ts":"2024-05-20T11:08:08.416671Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:08:08.416769Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-05-20T11:09:27.095868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.1027ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13477461232947984917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:3b098f95aff67a14>","response":"size:40"}
	{"level":"info","ts":"2024-05-20T11:09:27.09673Z","caller":"traceutil/trace.go:171","msg":"trace[1071855207] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"197.350637ms","start":"2024-05-20T11:09:26.899337Z","end":"2024-05-20T11:09:27.096688Z","steps":["trace[1071855207] 'process raft request'  (duration: 197.258542ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:09:27.096805Z","caller":"traceutil/trace.go:171","msg":"trace[2047829939] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"200.97191ms","start":"2024-05-20T11:09:26.89582Z","end":"2024-05-20T11:09:27.096792Z","steps":["trace[2047829939] 'read index received'  (duration: 61.428µs)","trace[2047829939] 'applied index is now lower than readState.Index'  (duration: 200.909528ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T11:09:27.096979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.121126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-563238-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-20T11:09:27.098532Z","caller":"traceutil/trace.go:171","msg":"trace[82515920] range","detail":"{range_begin:/registry/minions/multinode-563238-m02; range_end:; response_count:1; response_revision:480; }","duration":"202.725786ms","start":"2024-05-20T11:09:26.895787Z","end":"2024-05-20T11:09:27.098513Z","steps":["trace[82515920] 'agreement among raft nodes before linearized reading'  (duration: 201.058645ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.225615Z","caller":"traceutil/trace.go:171","msg":"trace[1062220281] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"157.41348ms","start":"2024-05-20T11:10:15.068184Z","end":"2024-05-20T11:10:15.225598Z","steps":["trace[1062220281] 'process raft request'  (duration: 157.363292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:10:15.225781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.177395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T11:10:15.225839Z","caller":"traceutil/trace.go:171","msg":"trace[2128692745] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:609; }","duration":"184.272541ms","start":"2024-05-20T11:10:15.041556Z","end":"2024-05-20T11:10:15.225828Z","steps":["trace[2128692745] 'agreement among raft nodes before linearized reading'  (duration: 184.155971ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.225664Z","caller":"traceutil/trace.go:171","msg":"trace[714839611] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"188.489094ms","start":"2024-05-20T11:10:15.037167Z","end":"2024-05-20T11:10:15.225656Z","steps":["trace[714839611] 'process raft request'  (duration: 187.043525ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.225697Z","caller":"traceutil/trace.go:171","msg":"trace[113598982] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"161.567537ms","start":"2024-05-20T11:10:15.064122Z","end":"2024-05-20T11:10:15.22569Z","steps":["trace[113598982] 'read index received'  (duration: 160.09678ms)","trace[113598982] 'applied index is now lower than readState.Index'  (duration: 1.46993ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T11:10:20.141813Z","caller":"traceutil/trace.go:171","msg":"trace[321737632] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"125.121409ms","start":"2024-05-20T11:10:20.016666Z","end":"2024-05-20T11:10:20.141788Z","steps":["trace[321737632] 'process raft request'  (duration: 124.797511ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:22.455112Z","caller":"traceutil/trace.go:171","msg":"trace[1157728450] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"102.133927ms","start":"2024-05-20T11:10:22.352965Z","end":"2024-05-20T11:10:22.455099Z","steps":["trace[1157728450] 'process raft request'  (duration: 100.248801ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:13:07.577671Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T11:13:07.577829Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-563238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	{"level":"warn","ts":"2024-05-20T11:13:07.577925Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:13:07.578005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:13:07.601061Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"warn","ts":"2024-05-20T11:13:07.635612Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:13:07.635686Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T11:13:07.635801Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"44b3a0f32f80bb09"}
	{"level":"info","ts":"2024-05-20T11:13:07.640802Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:13:07.640874Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:13:07.6409Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-563238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> etcd [df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524] <==
	{"level":"info","ts":"2024-05-20T11:14:47.282722Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T11:14:47.282749Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T11:14:47.292541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 switched to configuration voters=(4950477381744769801)"}
	{"level":"info","ts":"2024-05-20T11:14:47.293423Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"44b3a0f32f80bb09","added-peer-peer-urls":["https://192.168.39.145:2380"]}
	{"level":"info","ts":"2024-05-20T11:14:47.293597Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:14:47.294727Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:14:47.311054Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:14:47.319617Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"44b3a0f32f80bb09","initial-advertise-peer-urls":["https://192.168.39.145:2380"],"listen-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.145:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:14:47.325306Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:14:47.314529Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:14:47.330373Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:14:48.512159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T11:14:48.512222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:14:48.512335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 2"}
	{"level":"info","ts":"2024-05-20T11:14:48.512355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.512362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.512373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.512387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.518088Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:multinode-563238 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:14:48.51815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:14:48.518921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:14:48.519116Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:14:48.519162Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:14:48.52192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:14:48.521928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	
	
	==> kernel <==
	 11:16:07 up 8 min,  0 users,  load average: 0.58, 0.40, 0.20
	Linux multinode-563238 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011] <==
	I0520 11:15:23.042052       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:15:33.054832       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:15:33.054869       1 main.go:227] handling current node
	I0520 11:15:33.054879       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:15:33.054885       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:15:33.054996       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:15:33.055021       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:15:43.064806       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:15:43.064847       1 main.go:227] handling current node
	I0520 11:15:43.064857       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:15:43.064881       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:15:43.064986       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:15:43.065012       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:15:53.079833       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:15:53.080080       1 main.go:227] handling current node
	I0520 11:15:53.080130       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:15:53.080157       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:15:53.080746       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:15:53.080869       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:16:03.092761       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:16:03.092795       1 main.go:227] handling current node
	I0520 11:16:03.092804       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:16:03.092810       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:16:03.092902       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:16:03.092906       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654] <==
	I0520 11:12:19.158032       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:29.167056       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:29.167212       1 main.go:227] handling current node
	I0520 11:12:29.167252       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:29.167343       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:29.167473       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:29.167496       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:39.178379       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:39.178551       1 main.go:227] handling current node
	I0520 11:12:39.178592       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:39.178613       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:39.178718       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:39.178753       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:49.184059       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:49.184098       1 main.go:227] handling current node
	I0520 11:12:49.184107       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:49.184112       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:49.184215       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:49.184241       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:59.194893       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:59.194932       1 main.go:227] handling current node
	I0520 11:12:59.194942       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:59.194948       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:59.195061       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:59.195094       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0] <==
	I0520 11:14:49.857904       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 11:14:49.892841       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 11:14:49.894490       1 aggregator.go:165] initial CRD sync complete...
	I0520 11:14:49.894522       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 11:14:49.894528       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 11:14:49.937883       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 11:14:49.937973       1 policy_source.go:224] refreshing policies
	I0520 11:14:49.943622       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 11:14:50.005779       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 11:14:50.005902       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:14:50.006566       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 11:14:50.006635       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:14:50.005452       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:14:50.007036       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 11:14:50.008762       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 11:14:50.011960       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:14:50.012827       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 11:14:50.791905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:14:51.441648       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 11:14:51.550393       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 11:14:51.567705       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 11:14:51.640448       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:14:51.649486       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 11:15:03.144216       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 11:15:03.183969       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c] <==
	E0520 11:09:45.024585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.145:8443->192.168.39.1:48348: use of closed network connection
	E0520 11:09:45.180746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.145:8443->192.168.39.1:48364: use of closed network connection
	E0520 11:09:45.350838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.145:8443->192.168.39.1:48372: use of closed network connection
	I0520 11:13:07.559547       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0520 11:13:07.589747       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.589836       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.589874       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.590255       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.595911       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0520 11:13:07.596074       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	E0520 11:13:07.596110       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596157       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596189       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596721       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596763       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596796       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596827       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596856       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596885       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596913       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596942       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0520 11:13:07.598054       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0520 11:13:07.598510       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0520 11:13:07.599229       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0520 11:13:07.600130       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0] <==
	I0520 11:09:27.159699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-563238-m02"
	I0520 11:09:27.167521       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m02" podCIDRs=["10.244.1.0/24"]
	I0520 11:09:36.779874       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:09:39.219378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.728613ms"
	I0520 11:09:39.238052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.369467ms"
	I0520 11:09:39.238444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.67µs"
	I0520 11:09:39.242553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.436µs"
	I0520 11:09:39.250372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.642µs"
	I0520 11:09:42.904614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.88734ms"
	I0520 11:09:42.904799       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.788µs"
	I0520 11:09:43.326008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.305064ms"
	I0520 11:09:43.326088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.878µs"
	I0520 11:10:15.228486       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m03\" does not exist"
	I0520 11:10:15.228956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:15.245744       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m03" podCIDRs=["10.244.2.0/24"]
	I0520 11:10:17.182514       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-563238-m03"
	I0520 11:10:23.988501       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:52.074193       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:53.020914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:53.022972       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m03\" does not exist"
	I0520 11:10:53.027543       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m03" podCIDRs=["10.244.3.0/24"]
	I0520 11:11:01.903730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:11:42.236381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m03"
	I0520 11:11:42.272161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.427285ms"
	I0520 11:11:42.272632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.58µs"
	
	
	==> kube-controller-manager [f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631] <==
	I0520 11:15:03.782358       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 11:15:03.782465       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 11:15:23.265166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.113963ms"
	I0520 11:15:23.265375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.642µs"
	I0520 11:15:23.274976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.357547ms"
	I0520 11:15:23.276078       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.732µs"
	I0520 11:15:23.276208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.752µs"
	I0520 11:15:27.436601       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m02\" does not exist"
	I0520 11:15:27.459890       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m02" podCIDRs=["10.244.1.0/24"]
	I0520 11:15:29.359188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.462µs"
	I0520 11:15:29.427749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.059µs"
	I0520 11:15:29.439963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.903µs"
	I0520 11:15:29.444701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.927µs"
	I0520 11:15:29.448765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.483µs"
	I0520 11:15:32.770152       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.129µs"
	I0520 11:15:36.726971       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:15:36.748926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.93µs"
	I0520 11:15:36.763557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.705µs"
	I0520 11:15:39.611619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.06113ms"
	I0520 11:15:39.611990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.901µs"
	I0520 11:15:55.180118       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:15:56.243599       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:15:56.244692       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m03\" does not exist"
	I0520 11:15:56.257884       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m03" podCIDRs=["10.244.2.0/24"]
	I0520 11:16:04.456720       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	
	
	==> kube-proxy [39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195] <==
	I0520 11:14:52.415466       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:14:52.454743       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	I0520 11:14:52.511770       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:14:52.512364       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:14:52.512470       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:14:52.517510       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:14:52.517746       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:14:52.517778       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:14:52.518890       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:14:52.520392       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:14:52.520992       1 config.go:192] "Starting service config controller"
	I0520 11:14:52.521063       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:14:52.521142       1 config.go:319] "Starting node config controller"
	I0520 11:14:52.521183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:14:52.620826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:14:52.621895       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:14:52.621911       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879] <==
	I0520 11:08:28.635012       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:08:28.653163       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	I0520 11:08:28.696062       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:08:28.696157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:08:28.696186       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:08:28.700159       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:08:28.700486       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:08:28.700518       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:08:28.701807       1 config.go:192] "Starting service config controller"
	I0520 11:08:28.701855       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:08:28.701883       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:08:28.701887       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:08:28.703686       1 config.go:319] "Starting node config controller"
	I0520 11:08:28.703719       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:08:28.802943       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:08:28.803071       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:08:28.804550       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68] <==
	I0520 11:14:47.809936       1 serving.go:380] Generated self-signed cert in-memory
	W0520 11:14:49.868808       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:14:49.868883       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:14:49.868913       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:14:49.868941       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:14:49.902994       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:14:49.905344       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:14:49.909733       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 11:14:49.909925       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:14:49.909968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:14:49.910050       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:14:50.010970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9] <==
	E0520 11:08:10.676462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:08:10.675383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:08:10.676501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:08:10.675557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:08:10.676537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:08:11.557260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:08:11.557373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:08:11.619656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:08:11.619772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:08:11.753514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:08:11.753631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:08:11.821659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:08:11.822800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 11:08:11.822588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 11:08:11.822895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 11:08:11.894544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:08:11.894589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 11:08:11.908059       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:08:11.909024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:08:11.908584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:08:11.909163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:08:11.912474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:08:11.913823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0520 11:08:12.266335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 11:13:07.566145       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 11:14:50 multinode-563238 kubelet[3077]: I0520 11:14:50.041651    3077 kubelet_node_status.go:112] "Node was previously registered" node="multinode-563238"
	May 20 11:14:50 multinode-563238 kubelet[3077]: I0520 11:14:50.041818    3077 kubelet_node_status.go:76] "Successfully registered node" node="multinode-563238"
	May 20 11:14:50 multinode-563238 kubelet[3077]: I0520 11:14:50.044932    3077 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 11:14:50 multinode-563238 kubelet[3077]: I0520 11:14:50.045832    3077 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 11:14:50 multinode-563238 kubelet[3077]: E0520 11:14:50.069788    3077 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-563238\" already exists" pod="kube-system/kube-apiserver-multinode-563238"
	May 20 11:14:50 multinode-563238 kubelet[3077]: E0520 11:14:50.979971    3077 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:50 multinode-563238 kubelet[3077]: E0520 11:14:50.980556    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b3c92bc-3be5-42a6-b8da-218ad474da19-kube-proxy podName:5b3c92bc-3be5-42a6-b8da-218ad474da19 nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.480504921 +0000 UTC m=+5.698735219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/5b3c92bc-3be5-42a6-b8da-218ad474da19-kube-proxy") pod "kube-proxy-k8lrj" (UID: "5b3c92bc-3be5-42a6-b8da-218ad474da19") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.041190    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.041373    3077 projected.go:200] Error preparing data for projected volume kube-api-access-t4bxq for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.041513    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edef1751-3621-4446-b25e-bf6b4411c4ca-kube-api-access-t4bxq podName:edef1751-3621-4446-b25e-bf6b4411c4ca nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.541495401 +0000 UTC m=+5.759725710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t4bxq" (UniqueName: "kubernetes.io/projected/edef1751-3621-4446-b25e-bf6b4411c4ca-kube-api-access-t4bxq") pod "storage-provisioner" (UID: "edef1751-3621-4446-b25e-bf6b4411c4ca") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.045439    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.045480    3077 projected.go:200] Error preparing data for projected volume kube-api-access-njlhd for pod kube-system/coredns-7db6d8ff4d-6pmxg: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.045526    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccf30108-39a4-4d98-8f61-27d16c57ba3d-kube-api-access-njlhd podName:ccf30108-39a4-4d98-8f61-27d16c57ba3d nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.545512173 +0000 UTC m=+5.763742482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-njlhd" (UniqueName: "kubernetes.io/projected/ccf30108-39a4-4d98-8f61-27d16c57ba3d-kube-api-access-njlhd") pod "coredns-7db6d8ff4d-6pmxg" (UID: "ccf30108-39a4-4d98-8f61-27d16c57ba3d") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053673    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053715    3077 projected.go:200] Error preparing data for projected volume kube-api-access-vfxp7 for pod kube-system/kindnet-cwg9g: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053756    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc5ffb6e-96f3-49d6-97e9-efa5adf9f489-kube-api-access-vfxp7 podName:cc5ffb6e-96f3-49d6-97e9-efa5adf9f489 nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.553744003 +0000 UTC m=+5.771974312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vfxp7" (UniqueName: "kubernetes.io/projected/cc5ffb6e-96f3-49d6-97e9-efa5adf9f489-kube-api-access-vfxp7") pod "kindnet-cwg9g" (UID: "cc5ffb6e-96f3-49d6-97e9-efa5adf9f489") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053777    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053784    3077 projected.go:200] Error preparing data for projected volume kube-api-access-xtpvw for pod kube-system/kube-proxy-k8lrj: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053804    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b3c92bc-3be5-42a6-b8da-218ad474da19-kube-api-access-xtpvw podName:5b3c92bc-3be5-42a6-b8da-218ad474da19 nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.553798548 +0000 UTC m=+5.772028857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtpvw" (UniqueName: "kubernetes.io/projected/5b3c92bc-3be5-42a6-b8da-218ad474da19-kube-api-access-xtpvw") pod "kube-proxy-k8lrj" (UID: "5b3c92bc-3be5-42a6-b8da-218ad474da19") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:15:00 multinode-563238 kubelet[3077]: I0520 11:15:00.033562    3077 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 20 11:15:45 multinode-563238 kubelet[3077]: E0520 11:15:45.969096    3077 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:15:45 multinode-563238 kubelet[3077]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:15:45 multinode-563238 kubelet[3077]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:15:45 multinode-563238 kubelet[3077]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:15:45 multinode-563238 kubelet[3077]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:16:06.966494   41412 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18925-3819/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
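The "token too long" failure above is Go's bufio.ErrTooLong: bufio.Scanner rejects any single line longer than its default 64 KiB token limit, and lastStart.txt evidently contains one. As a minimal sketch (not minikube's actual logs.go implementation; the file path and buffer sizes are illustrative assumptions), a caller can raise that limit with Scanner.Buffer before scanning:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		// Illustrative path only, not the real minikube log location.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Default max token size is 64 KiB; allow single lines up to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above, an over-long line surfaces here as
			// "bufio.Scanner: token too long".
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}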
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-563238 -n multinode-563238
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-563238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (304.31s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 stop
E0520 11:17:19.192432   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563238 stop: exit status 82 (2m0.455217949s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-563238-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-563238 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563238 status: exit status 3 (18.810428954s)

                                                
                                                
-- stdout --
	multinode-563238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563238-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:18:30.157251   42561 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0520 11:18:30.157285   42561 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-563238 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-563238 -n multinode-563238
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-563238 logs -n 25: (1.446681353s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238:/home/docker/cp-test_multinode-563238-m02_multinode-563238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238 sudo cat                                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m02_multinode-563238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03:/home/docker/cp-test_multinode-563238-m02_multinode-563238-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238-m03 sudo cat                                   | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m02_multinode-563238-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp testdata/cp-test.txt                                                | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2129143398/001/cp-test_multinode-563238-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238:/home/docker/cp-test_multinode-563238-m03_multinode-563238.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238 sudo cat                                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m03_multinode-563238.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt                       | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m02:/home/docker/cp-test_multinode-563238-m03_multinode-563238-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n                                                                 | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | multinode-563238-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-563238 ssh -n multinode-563238-m02 sudo cat                                   | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	|         | /home/docker/cp-test_multinode-563238-m03_multinode-563238-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-563238 node stop m03                                                          | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:10 UTC |
	| node    | multinode-563238 node start                                                             | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:10 UTC | 20 May 24 11:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-563238                                                                | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:11 UTC |                     |
	| stop    | -p multinode-563238                                                                     | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:11 UTC |                     |
	| start   | -p multinode-563238                                                                     | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:13 UTC | 20 May 24 11:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-563238                                                                | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:16 UTC |                     |
	| node    | multinode-563238 node delete                                                            | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:16 UTC | 20 May 24 11:16 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-563238 stop                                                                   | multinode-563238 | jenkins | v1.33.1 | 20 May 24 11:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:13:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:13:06.580258   40362 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:13:06.580529   40362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:13:06.580539   40362 out.go:304] Setting ErrFile to fd 2...
	I0520 11:13:06.580543   40362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:13:06.580708   40362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:13:06.581256   40362 out.go:298] Setting JSON to false
	I0520 11:13:06.582137   40362 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3330,"bootTime":1716200257,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:13:06.582195   40362 start.go:139] virtualization: kvm guest
	I0520 11:13:06.584352   40362 out.go:177] * [multinode-563238] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:13:06.585661   40362 notify.go:220] Checking for updates...
	I0520 11:13:06.585682   40362 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:13:06.587033   40362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:13:06.588302   40362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:13:06.589537   40362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:13:06.590764   40362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:13:06.592138   40362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:13:06.593729   40362 config.go:182] Loaded profile config "multinode-563238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:13:06.593840   40362 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:13:06.594208   40362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:13:06.594262   40362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:13:06.609195   40362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42621
	I0520 11:13:06.609626   40362 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:13:06.610265   40362 main.go:141] libmachine: Using API Version  1
	I0520 11:13:06.610292   40362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:13:06.610710   40362 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:13:06.610951   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:13:06.645584   40362 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:13:06.646930   40362 start.go:297] selected driver: kvm2
	I0520 11:13:06.646948   40362 start.go:901] validating driver "kvm2" against &{Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:13:06.647091   40362 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:13:06.647463   40362 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:13:06.647560   40362 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:13:06.662193   40362 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:13:06.662886   40362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:13:06.662918   40362 cni.go:84] Creating CNI manager for ""
	I0520 11:13:06.662930   40362 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 11:13:06.662991   40362 start.go:340] cluster config:
	{Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:13:06.663126   40362 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:13:06.665828   40362 out.go:177] * Starting "multinode-563238" primary control-plane node in "multinode-563238" cluster
	I0520 11:13:06.666953   40362 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:13:06.666985   40362 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:13:06.666997   40362 cache.go:56] Caching tarball of preloaded images
	I0520 11:13:06.667073   40362 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:13:06.667085   40362 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:13:06.667216   40362 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/config.json ...
	I0520 11:13:06.667420   40362 start.go:360] acquireMachinesLock for multinode-563238: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:13:06.667473   40362 start.go:364] duration metric: took 33.362µs to acquireMachinesLock for "multinode-563238"
	I0520 11:13:06.667492   40362 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:13:06.667502   40362 fix.go:54] fixHost starting: 
	I0520 11:13:06.667754   40362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:13:06.667790   40362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:13:06.681455   40362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0520 11:13:06.681886   40362 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:13:06.682372   40362 main.go:141] libmachine: Using API Version  1
	I0520 11:13:06.682400   40362 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:13:06.682658   40362 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:13:06.682834   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:13:06.682984   40362 main.go:141] libmachine: (multinode-563238) Calling .GetState
	I0520 11:13:06.684407   40362 fix.go:112] recreateIfNeeded on multinode-563238: state=Running err=<nil>
	W0520 11:13:06.684439   40362 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:13:06.686100   40362 out.go:177] * Updating the running kvm2 "multinode-563238" VM ...
	I0520 11:13:06.687256   40362 machine.go:94] provisionDockerMachine start ...
	I0520 11:13:06.687270   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:13:06.687454   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:06.689884   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.690355   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:06.690372   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.690459   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:06.690614   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.690743   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.690881   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:06.691027   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:06.691242   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:06.691254   40362 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:13:06.806394   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-563238
	
	I0520 11:13:06.806424   40362 main.go:141] libmachine: (multinode-563238) Calling .GetMachineName
	I0520 11:13:06.806677   40362 buildroot.go:166] provisioning hostname "multinode-563238"
	I0520 11:13:06.806704   40362 main.go:141] libmachine: (multinode-563238) Calling .GetMachineName
	I0520 11:13:06.806883   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:06.809489   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.809840   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:06.809861   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.809973   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:06.810153   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.810331   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.810465   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:06.810621   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:06.810837   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:06.810849   40362 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-563238 && echo "multinode-563238" | sudo tee /etc/hostname
	I0520 11:13:06.932778   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-563238
	
	I0520 11:13:06.932817   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:06.935531   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.935893   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:06.935931   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:06.936082   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:06.936242   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.936418   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:06.936546   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:06.936677   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:06.936837   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:06.936853   40362 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-563238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-563238/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-563238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:13:07.046173   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:13:07.046207   40362 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:13:07.046234   40362 buildroot.go:174] setting up certificates
	I0520 11:13:07.046241   40362 provision.go:84] configureAuth start
	I0520 11:13:07.046249   40362 main.go:141] libmachine: (multinode-563238) Calling .GetMachineName
	I0520 11:13:07.046490   40362 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:13:07.049140   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.049510   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.049537   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.049718   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:07.051745   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.052053   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.052074   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.052230   40362 provision.go:143] copyHostCerts
	I0520 11:13:07.052256   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:13:07.052283   40362 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:13:07.052298   40362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:13:07.052377   40362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:13:07.052469   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:13:07.052487   40362 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:13:07.052491   40362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:13:07.052516   40362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:13:07.052578   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:13:07.052596   40362 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:13:07.052603   40362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:13:07.052626   40362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:13:07.052724   40362 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.multinode-563238 san=[127.0.0.1 192.168.39.145 localhost minikube multinode-563238]
	I0520 11:13:07.262000   40362 provision.go:177] copyRemoteCerts
	I0520 11:13:07.262053   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:13:07.262075   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:07.264651   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.265029   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.265062   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.265193   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:07.265404   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:07.265568   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:07.265686   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:13:07.351571   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0520 11:13:07.351637   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:13:07.381626   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0520 11:13:07.381693   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:13:07.406311   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0520 11:13:07.406375   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0520 11:13:07.431518   40362 provision.go:87] duration metric: took 385.265618ms to configureAuth
	I0520 11:13:07.431551   40362 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:13:07.431815   40362 config.go:182] Loaded profile config "multinode-563238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:13:07.431908   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:13:07.434370   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.434721   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:13:07.434751   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:13:07.434897   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:13:07.435076   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:07.435211   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:13:07.435329   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:13:07.435471   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:13:07.435646   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:13:07.435660   40362 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:14:38.274132   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:14:38.274162   40362 machine.go:97] duration metric: took 1m31.586896667s to provisionDockerMachine
	I0520 11:14:38.274173   40362 start.go:293] postStartSetup for "multinode-563238" (driver="kvm2")
	I0520 11:14:38.274183   40362 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:14:38.274198   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.274555   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:14:38.274584   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.277536   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.277911   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.277931   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.278133   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.278317   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.278488   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.278631   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:14:38.364425   40362 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:14:38.368757   40362 command_runner.go:130] > NAME=Buildroot
	I0520 11:14:38.368774   40362 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0520 11:14:38.368778   40362 command_runner.go:130] > ID=buildroot
	I0520 11:14:38.368783   40362 command_runner.go:130] > VERSION_ID=2023.02.9
	I0520 11:14:38.368788   40362 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0520 11:14:38.368920   40362 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:14:38.368956   40362 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:14:38.369035   40362 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:14:38.369125   40362 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:14:38.369137   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /etc/ssl/certs/111622.pem
	I0520 11:14:38.369220   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:14:38.378382   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:14:38.403233   40362 start.go:296] duration metric: took 129.047244ms for postStartSetup
	I0520 11:14:38.403298   40362 fix.go:56] duration metric: took 1m31.735795776s for fixHost
	I0520 11:14:38.403328   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.405873   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.406306   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.406337   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.406476   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.406689   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.406844   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.406975   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.407121   40362 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:38.407321   40362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.145 22 <nil> <nil>}
	I0520 11:14:38.407337   40362 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:14:38.514504   40362 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716203678.495711872
	
	I0520 11:14:38.514534   40362 fix.go:216] guest clock: 1716203678.495711872
	I0520 11:14:38.514543   40362 fix.go:229] Guest: 2024-05-20 11:14:38.495711872 +0000 UTC Remote: 2024-05-20 11:14:38.403306332 +0000 UTC m=+91.856185819 (delta=92.40554ms)
	I0520 11:14:38.514570   40362 fix.go:200] guest clock delta is within tolerance: 92.40554ms
	I0520 11:14:38.514579   40362 start.go:83] releasing machines lock for "multinode-563238", held for 1m31.847095704s
	I0520 11:14:38.514607   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.514898   40362 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:14:38.517488   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.517891   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.517917   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.518073   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.518526   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.518694   40362 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:14:38.518759   40362 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:14:38.518833   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.518941   40362 ssh_runner.go:195] Run: cat /version.json
	I0520 11:14:38.518966   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:14:38.521319   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.521416   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.521708   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.521733   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.521883   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:38.521899   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.521919   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:38.522096   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:14:38.522104   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.522264   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.522272   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:14:38.522407   40362 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:14:38.522403   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:14:38.522535   40362 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:14:38.597988   40362 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44", "minikube_version": "v1.33.0", "commit": "834a374b6ab6f5588f185542d3297469bec856cc"}
	W0520 11:14:38.598200   40362 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:14:38.598305   40362 ssh_runner.go:195] Run: systemctl --version
	I0520 11:14:38.621915   40362 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0520 11:14:38.622454   40362 command_runner.go:130] > systemd 252 (252)
	I0520 11:14:38.622483   40362 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0520 11:14:38.622535   40362 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:14:38.781416   40362 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 11:14:38.788944   40362 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0520 11:14:38.789213   40362 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:14:38.789265   40362 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:14:38.798884   40362 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 11:14:38.798906   40362 start.go:494] detecting cgroup driver to use...
	I0520 11:14:38.798953   40362 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:14:38.815581   40362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:14:38.829309   40362 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:14:38.829350   40362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:14:38.842495   40362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:14:38.856035   40362 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:14:38.997688   40362 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:14:39.135873   40362 docker.go:233] disabling docker service ...
	I0520 11:14:39.135942   40362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:14:39.153541   40362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:14:39.166895   40362 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:14:39.305230   40362 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:14:39.444348   40362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:14:39.458428   40362 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:14:39.476968   40362 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0520 11:14:39.477431   40362 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:14:39.477495   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.487995   40362 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:14:39.488044   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.498290   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.508606   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.518699   40362 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:14:39.529275   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.539244   40362 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.551876   40362 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:14:39.562545   40362 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:14:39.572208   40362 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0520 11:14:39.572260   40362 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:14:39.581762   40362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:39.744659   40362 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:14:43.214343   40362 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.469645141s)
	I0520 11:14:43.214380   40362 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:14:43.214425   40362 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:14:43.219563   40362 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0520 11:14:43.219589   40362 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0520 11:14:43.219597   40362 command_runner.go:130] > Device: 0,22	Inode: 1324        Links: 1
	I0520 11:14:43.219607   40362 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 11:14:43.219616   40362 command_runner.go:130] > Access: 2024-05-20 11:14:43.085564325 +0000
	I0520 11:14:43.219625   40362 command_runner.go:130] > Modify: 2024-05-20 11:14:43.085564325 +0000
	I0520 11:14:43.219633   40362 command_runner.go:130] > Change: 2024-05-20 11:14:43.085564325 +0000
	I0520 11:14:43.219638   40362 command_runner.go:130] >  Birth: -
	I0520 11:14:43.219961   40362 start.go:562] Will wait 60s for crictl version
	I0520 11:14:43.220019   40362 ssh_runner.go:195] Run: which crictl
	I0520 11:14:43.223850   40362 command_runner.go:130] > /usr/bin/crictl
	I0520 11:14:43.224026   40362 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:14:43.264030   40362 command_runner.go:130] > Version:  0.1.0
	I0520 11:14:43.264059   40362 command_runner.go:130] > RuntimeName:  cri-o
	I0520 11:14:43.264064   40362 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0520 11:14:43.264070   40362 command_runner.go:130] > RuntimeApiVersion:  v1
	I0520 11:14:43.265433   40362 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:14:43.265508   40362 ssh_runner.go:195] Run: crio --version
	I0520 11:14:43.293207   40362 command_runner.go:130] > crio version 1.29.1
	I0520 11:14:43.293236   40362 command_runner.go:130] > Version:        1.29.1
	I0520 11:14:43.293244   40362 command_runner.go:130] > GitCommit:      unknown
	I0520 11:14:43.293248   40362 command_runner.go:130] > GitCommitDate:  unknown
	I0520 11:14:43.293252   40362 command_runner.go:130] > GitTreeState:   clean
	I0520 11:14:43.293257   40362 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 11:14:43.293261   40362 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 11:14:43.293264   40362 command_runner.go:130] > Compiler:       gc
	I0520 11:14:43.293269   40362 command_runner.go:130] > Platform:       linux/amd64
	I0520 11:14:43.293277   40362 command_runner.go:130] > Linkmode:       dynamic
	I0520 11:14:43.293282   40362 command_runner.go:130] > BuildTags:      
	I0520 11:14:43.293286   40362 command_runner.go:130] >   containers_image_ostree_stub
	I0520 11:14:43.293292   40362 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 11:14:43.293299   40362 command_runner.go:130] >   btrfs_noversion
	I0520 11:14:43.293306   40362 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 11:14:43.293314   40362 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 11:14:43.293321   40362 command_runner.go:130] >   seccomp
	I0520 11:14:43.293328   40362 command_runner.go:130] > LDFlags:          unknown
	I0520 11:14:43.293339   40362 command_runner.go:130] > SeccompEnabled:   true
	I0520 11:14:43.293344   40362 command_runner.go:130] > AppArmorEnabled:  false
	I0520 11:14:43.294554   40362 ssh_runner.go:195] Run: crio --version
	I0520 11:14:43.323133   40362 command_runner.go:130] > crio version 1.29.1
	I0520 11:14:43.323151   40362 command_runner.go:130] > Version:        1.29.1
	I0520 11:14:43.323157   40362 command_runner.go:130] > GitCommit:      unknown
	I0520 11:14:43.323161   40362 command_runner.go:130] > GitCommitDate:  unknown
	I0520 11:14:43.323164   40362 command_runner.go:130] > GitTreeState:   clean
	I0520 11:14:43.323169   40362 command_runner.go:130] > BuildDate:      2024-05-13T16:07:33Z
	I0520 11:14:43.323173   40362 command_runner.go:130] > GoVersion:      go1.21.6
	I0520 11:14:43.323178   40362 command_runner.go:130] > Compiler:       gc
	I0520 11:14:43.323182   40362 command_runner.go:130] > Platform:       linux/amd64
	I0520 11:14:43.323186   40362 command_runner.go:130] > Linkmode:       dynamic
	I0520 11:14:43.323190   40362 command_runner.go:130] > BuildTags:      
	I0520 11:14:43.323194   40362 command_runner.go:130] >   containers_image_ostree_stub
	I0520 11:14:43.323198   40362 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0520 11:14:43.323204   40362 command_runner.go:130] >   btrfs_noversion
	I0520 11:14:43.323208   40362 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0520 11:14:43.323213   40362 command_runner.go:130] >   libdm_no_deferred_remove
	I0520 11:14:43.323217   40362 command_runner.go:130] >   seccomp
	I0520 11:14:43.323221   40362 command_runner.go:130] > LDFlags:          unknown
	I0520 11:14:43.323226   40362 command_runner.go:130] > SeccompEnabled:   true
	I0520 11:14:43.323230   40362 command_runner.go:130] > AppArmorEnabled:  false
	I0520 11:14:43.325523   40362 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:14:43.327250   40362 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:14:43.329639   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:43.330041   40362 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:14:43.330072   40362 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:14:43.330275   40362 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:14:43.334369   40362 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0520 11:14:43.334453   40362 kubeadm.go:877] updating cluster {Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:14:43.334575   40362 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:14:43.334614   40362 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:14:43.383753   40362 command_runner.go:130] > {
	I0520 11:14:43.383782   40362 command_runner.go:130] >   "images": [
	I0520 11:14:43.383789   40362 command_runner.go:130] >     {
	I0520 11:14:43.383809   40362 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 11:14:43.383816   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.383824   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 11:14:43.383830   40362 command_runner.go:130] >       ],
	I0520 11:14:43.383835   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.383848   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 11:14:43.383862   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 11:14:43.383870   40362 command_runner.go:130] >       ],
	I0520 11:14:43.383880   40362 command_runner.go:130] >       "size": "65291810",
	I0520 11:14:43.383890   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.383897   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.383910   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.383919   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.383926   40362 command_runner.go:130] >     },
	I0520 11:14:43.383934   40362 command_runner.go:130] >     {
	I0520 11:14:43.383945   40362 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 11:14:43.383952   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.383961   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 11:14:43.383969   40362 command_runner.go:130] >       ],
	I0520 11:14:43.383976   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.383990   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 11:14:43.384005   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 11:14:43.384012   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384021   40362 command_runner.go:130] >       "size": "1363676",
	I0520 11:14:43.384032   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384043   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384048   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384055   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384064   40362 command_runner.go:130] >     },
	I0520 11:14:43.384070   40362 command_runner.go:130] >     {
	I0520 11:14:43.384083   40362 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 11:14:43.384093   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384103   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 11:14:43.384111   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384118   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384135   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 11:14:43.384151   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 11:14:43.384161   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384168   40362 command_runner.go:130] >       "size": "31470524",
	I0520 11:14:43.384177   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384185   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384194   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384203   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384210   40362 command_runner.go:130] >     },
	I0520 11:14:43.384217   40362 command_runner.go:130] >     {
	I0520 11:14:43.384231   40362 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 11:14:43.384240   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384249   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 11:14:43.384257   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384265   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384280   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 11:14:43.384301   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 11:14:43.384310   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384318   40362 command_runner.go:130] >       "size": "61245718",
	I0520 11:14:43.384328   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384339   40362 command_runner.go:130] >       "username": "nonroot",
	I0520 11:14:43.384348   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384356   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384363   40362 command_runner.go:130] >     },
	I0520 11:14:43.384371   40362 command_runner.go:130] >     {
	I0520 11:14:43.384383   40362 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 11:14:43.384392   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384400   40362 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 11:14:43.384409   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384417   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384431   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 11:14:43.384446   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 11:14:43.384454   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384461   40362 command_runner.go:130] >       "size": "150779692",
	I0520 11:14:43.384471   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.384479   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.384488   40362 command_runner.go:130] >       },
	I0520 11:14:43.384495   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384505   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384513   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384522   40362 command_runner.go:130] >     },
	I0520 11:14:43.384528   40362 command_runner.go:130] >     {
	I0520 11:14:43.384539   40362 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 11:14:43.384550   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384559   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 11:14:43.384569   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384577   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384593   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 11:14:43.384608   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 11:14:43.384617   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384624   40362 command_runner.go:130] >       "size": "117601759",
	I0520 11:14:43.384634   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.384641   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.384650   40362 command_runner.go:130] >       },
	I0520 11:14:43.384657   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384667   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384677   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384683   40362 command_runner.go:130] >     },
	I0520 11:14:43.384691   40362 command_runner.go:130] >     {
	I0520 11:14:43.384702   40362 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 11:14:43.384712   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384724   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 11:14:43.384732   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384740   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384756   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 11:14:43.384773   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 11:14:43.384781   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384788   40362 command_runner.go:130] >       "size": "112170310",
	I0520 11:14:43.384802   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.384812   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.384819   40362 command_runner.go:130] >       },
	I0520 11:14:43.384827   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384837   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.384846   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.384852   40362 command_runner.go:130] >     },
	I0520 11:14:43.384860   40362 command_runner.go:130] >     {
	I0520 11:14:43.384871   40362 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 11:14:43.384881   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.384891   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 11:14:43.384900   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384908   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.384947   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 11:14:43.384960   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 11:14:43.384965   40362 command_runner.go:130] >       ],
	I0520 11:14:43.384970   40362 command_runner.go:130] >       "size": "85933465",
	I0520 11:14:43.384975   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.384992   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.384998   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.385005   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.385010   40362 command_runner.go:130] >     },
	I0520 11:14:43.385016   40362 command_runner.go:130] >     {
	I0520 11:14:43.385026   40362 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 11:14:43.385034   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.385042   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 11:14:43.385048   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385055   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.385066   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 11:14:43.385078   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 11:14:43.385083   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385090   40362 command_runner.go:130] >       "size": "63026504",
	I0520 11:14:43.385097   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.385108   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.385115   40362 command_runner.go:130] >       },
	I0520 11:14:43.385125   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.385131   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.385138   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.385146   40362 command_runner.go:130] >     },
	I0520 11:14:43.385153   40362 command_runner.go:130] >     {
	I0520 11:14:43.385167   40362 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 11:14:43.385180   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.385191   40362 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 11:14:43.385198   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385207   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.385220   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 11:14:43.385235   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 11:14:43.385244   40362 command_runner.go:130] >       ],
	I0520 11:14:43.385252   40362 command_runner.go:130] >       "size": "750414",
	I0520 11:14:43.385263   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.385273   40362 command_runner.go:130] >         "value": "65535"
	I0520 11:14:43.385282   40362 command_runner.go:130] >       },
	I0520 11:14:43.385290   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.385297   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.385305   40362 command_runner.go:130] >       "pinned": true
	I0520 11:14:43.385311   40362 command_runner.go:130] >     }
	I0520 11:14:43.385318   40362 command_runner.go:130] >   ]
	I0520 11:14:43.385324   40362 command_runner.go:130] > }
	I0520 11:14:43.386127   40362 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:14:43.386146   40362 crio.go:433] Images already preloaded, skipping extraction
	I0520 11:14:43.386208   40362 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:14:43.419456   40362 command_runner.go:130] > {
	I0520 11:14:43.419475   40362 command_runner.go:130] >   "images": [
	I0520 11:14:43.419479   40362 command_runner.go:130] >     {
	I0520 11:14:43.419486   40362 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0520 11:14:43.419492   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419497   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0520 11:14:43.419502   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419506   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419542   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0520 11:14:43.419562   40362 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0520 11:14:43.419568   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419577   40362 command_runner.go:130] >       "size": "65291810",
	I0520 11:14:43.419581   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.419585   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.419590   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.419597   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.419602   40362 command_runner.go:130] >     },
	I0520 11:14:43.419606   40362 command_runner.go:130] >     {
	I0520 11:14:43.419615   40362 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0520 11:14:43.419626   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419638   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0520 11:14:43.419648   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419657   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419671   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0520 11:14:43.419683   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0520 11:14:43.419690   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419694   40362 command_runner.go:130] >       "size": "1363676",
	I0520 11:14:43.419703   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.419717   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.419727   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.419737   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.419745   40362 command_runner.go:130] >     },
	I0520 11:14:43.419753   40362 command_runner.go:130] >     {
	I0520 11:14:43.419767   40362 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0520 11:14:43.419776   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419783   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0520 11:14:43.419792   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419802   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419818   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0520 11:14:43.419834   40362 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0520 11:14:43.419843   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419853   40362 command_runner.go:130] >       "size": "31470524",
	I0520 11:14:43.419862   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.419870   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.419878   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.419885   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.419894   40362 command_runner.go:130] >     },
	I0520 11:14:43.419900   40362 command_runner.go:130] >     {
	I0520 11:14:43.419913   40362 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0520 11:14:43.419923   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.419934   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0520 11:14:43.419943   40362 command_runner.go:130] >       ],
	I0520 11:14:43.419954   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.419966   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0520 11:14:43.419983   40362 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0520 11:14:43.419993   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420001   40362 command_runner.go:130] >       "size": "61245718",
	I0520 11:14:43.420011   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.420021   40362 command_runner.go:130] >       "username": "nonroot",
	I0520 11:14:43.420030   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420040   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420048   40362 command_runner.go:130] >     },
	I0520 11:14:43.420062   40362 command_runner.go:130] >     {
	I0520 11:14:43.420071   40362 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0520 11:14:43.420081   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420092   40362 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0520 11:14:43.420101   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420111   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420125   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0520 11:14:43.420138   40362 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0520 11:14:43.420147   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420156   40362 command_runner.go:130] >       "size": "150779692",
	I0520 11:14:43.420163   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420169   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420177   40362 command_runner.go:130] >       },
	I0520 11:14:43.420188   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420195   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420205   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420213   40362 command_runner.go:130] >     },
	I0520 11:14:43.420220   40362 command_runner.go:130] >     {
	I0520 11:14:43.420232   40362 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0520 11:14:43.420247   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420256   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0520 11:14:43.420262   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420269   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420285   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0520 11:14:43.420300   40362 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0520 11:14:43.420310   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420321   40362 command_runner.go:130] >       "size": "117601759",
	I0520 11:14:43.420330   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420339   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420348   40362 command_runner.go:130] >       },
	I0520 11:14:43.420357   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420364   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420369   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420377   40362 command_runner.go:130] >     },
	I0520 11:14:43.420386   40362 command_runner.go:130] >     {
	I0520 11:14:43.420397   40362 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0520 11:14:43.420406   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420418   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0520 11:14:43.420426   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420436   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420466   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0520 11:14:43.420486   40362 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0520 11:14:43.420491   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420497   40362 command_runner.go:130] >       "size": "112170310",
	I0520 11:14:43.420510   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420517   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420526   40362 command_runner.go:130] >       },
	I0520 11:14:43.420533   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420542   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420549   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420557   40362 command_runner.go:130] >     },
	I0520 11:14:43.420562   40362 command_runner.go:130] >     {
	I0520 11:14:43.420573   40362 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0520 11:14:43.420579   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420584   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0520 11:14:43.420590   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420595   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420611   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0520 11:14:43.420625   40362 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0520 11:14:43.420631   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420642   40362 command_runner.go:130] >       "size": "85933465",
	I0520 11:14:43.420648   40362 command_runner.go:130] >       "uid": null,
	I0520 11:14:43.420659   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420669   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420678   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420687   40362 command_runner.go:130] >     },
	I0520 11:14:43.420695   40362 command_runner.go:130] >     {
	I0520 11:14:43.420709   40362 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0520 11:14:43.420718   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420729   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0520 11:14:43.420738   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420744   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420758   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0520 11:14:43.420770   40362 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0520 11:14:43.420776   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420780   40362 command_runner.go:130] >       "size": "63026504",
	I0520 11:14:43.420786   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420790   40362 command_runner.go:130] >         "value": "0"
	I0520 11:14:43.420796   40362 command_runner.go:130] >       },
	I0520 11:14:43.420800   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420805   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420812   40362 command_runner.go:130] >       "pinned": false
	I0520 11:14:43.420815   40362 command_runner.go:130] >     },
	I0520 11:14:43.420820   40362 command_runner.go:130] >     {
	I0520 11:14:43.420825   40362 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0520 11:14:43.420831   40362 command_runner.go:130] >       "repoTags": [
	I0520 11:14:43.420836   40362 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0520 11:14:43.420842   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420846   40362 command_runner.go:130] >       "repoDigests": [
	I0520 11:14:43.420855   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0520 11:14:43.420864   40362 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0520 11:14:43.420870   40362 command_runner.go:130] >       ],
	I0520 11:14:43.420874   40362 command_runner.go:130] >       "size": "750414",
	I0520 11:14:43.420880   40362 command_runner.go:130] >       "uid": {
	I0520 11:14:43.420884   40362 command_runner.go:130] >         "value": "65535"
	I0520 11:14:43.420891   40362 command_runner.go:130] >       },
	I0520 11:14:43.420895   40362 command_runner.go:130] >       "username": "",
	I0520 11:14:43.420901   40362 command_runner.go:130] >       "spec": null,
	I0520 11:14:43.420912   40362 command_runner.go:130] >       "pinned": true
	I0520 11:14:43.420918   40362 command_runner.go:130] >     }
	I0520 11:14:43.420921   40362 command_runner.go:130] >   ]
	I0520 11:14:43.420943   40362 command_runner.go:130] > }
	I0520 11:14:43.421071   40362 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:14:43.421081   40362 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:14:43.421089   40362 kubeadm.go:928] updating node { 192.168.39.145 8443 v1.30.1 crio true true} ...
	I0520 11:14:43.421189   40362 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-563238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:14:43.421250   40362 ssh_runner.go:195] Run: crio config
	I0520 11:14:43.453637   40362 command_runner.go:130] ! time="2024-05-20 11:14:43.434932160Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0520 11:14:43.460085   40362 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0520 11:14:43.464280   40362 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0520 11:14:43.464314   40362 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0520 11:14:43.464325   40362 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0520 11:14:43.464330   40362 command_runner.go:130] > #
	I0520 11:14:43.464340   40362 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0520 11:14:43.464352   40362 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0520 11:14:43.464363   40362 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0520 11:14:43.464383   40362 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0520 11:14:43.464390   40362 command_runner.go:130] > # reload'.
	I0520 11:14:43.464401   40362 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0520 11:14:43.464415   40362 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0520 11:14:43.464427   40362 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0520 11:14:43.464441   40362 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0520 11:14:43.464447   40362 command_runner.go:130] > [crio]
	I0520 11:14:43.464458   40362 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0520 11:14:43.464472   40362 command_runner.go:130] > # containers images, in this directory.
	I0520 11:14:43.464480   40362 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0520 11:14:43.464496   40362 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0520 11:14:43.464506   40362 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0520 11:14:43.464520   40362 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0520 11:14:43.464530   40362 command_runner.go:130] > # imagestore = ""
	I0520 11:14:43.464542   40362 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0520 11:14:43.464556   40362 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0520 11:14:43.464567   40362 command_runner.go:130] > storage_driver = "overlay"
	I0520 11:14:43.464577   40362 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0520 11:14:43.464590   40362 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0520 11:14:43.464599   40362 command_runner.go:130] > storage_option = [
	I0520 11:14:43.464610   40362 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0520 11:14:43.464618   40362 command_runner.go:130] > ]
	I0520 11:14:43.464630   40362 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0520 11:14:43.464643   40362 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0520 11:14:43.464652   40362 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0520 11:14:43.464662   40362 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0520 11:14:43.464676   40362 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0520 11:14:43.464686   40362 command_runner.go:130] > # always happen on a node reboot
	I0520 11:14:43.464695   40362 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0520 11:14:43.464711   40362 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0520 11:14:43.464725   40362 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0520 11:14:43.464736   40362 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0520 11:14:43.464745   40362 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0520 11:14:43.464761   40362 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0520 11:14:43.464777   40362 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0520 11:14:43.464787   40362 command_runner.go:130] > # internal_wipe = true
	I0520 11:14:43.464801   40362 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0520 11:14:43.464813   40362 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0520 11:14:43.464827   40362 command_runner.go:130] > # internal_repair = false
	I0520 11:14:43.464839   40362 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0520 11:14:43.464852   40362 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0520 11:14:43.464865   40362 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0520 11:14:43.464876   40362 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0520 11:14:43.464890   40362 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0520 11:14:43.464899   40362 command_runner.go:130] > [crio.api]
	I0520 11:14:43.464908   40362 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0520 11:14:43.464919   40362 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0520 11:14:43.464942   40362 command_runner.go:130] > # IP address on which the stream server will listen.
	I0520 11:14:43.464953   40362 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0520 11:14:43.464967   40362 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0520 11:14:43.464998   40362 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0520 11:14:43.465008   40362 command_runner.go:130] > # stream_port = "0"
	I0520 11:14:43.465017   40362 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0520 11:14:43.465025   40362 command_runner.go:130] > # stream_enable_tls = false
	I0520 11:14:43.465036   40362 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0520 11:14:43.465046   40362 command_runner.go:130] > # stream_idle_timeout = ""
	I0520 11:14:43.465059   40362 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0520 11:14:43.465072   40362 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0520 11:14:43.465081   40362 command_runner.go:130] > # minutes.
	I0520 11:14:43.465088   40362 command_runner.go:130] > # stream_tls_cert = ""
	I0520 11:14:43.465100   40362 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0520 11:14:43.465111   40362 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0520 11:14:43.465120   40362 command_runner.go:130] > # stream_tls_key = ""
	I0520 11:14:43.465133   40362 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0520 11:14:43.465146   40362 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0520 11:14:43.465168   40362 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0520 11:14:43.465178   40362 command_runner.go:130] > # stream_tls_ca = ""
	I0520 11:14:43.465192   40362 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 11:14:43.465203   40362 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0520 11:14:43.465216   40362 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0520 11:14:43.465227   40362 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0520 11:14:43.465240   40362 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0520 11:14:43.465253   40362 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0520 11:14:43.465263   40362 command_runner.go:130] > [crio.runtime]
	I0520 11:14:43.465275   40362 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0520 11:14:43.465285   40362 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0520 11:14:43.465294   40362 command_runner.go:130] > # "nofile=1024:2048"
	I0520 11:14:43.465306   40362 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0520 11:14:43.465316   40362 command_runner.go:130] > # default_ulimits = [
	I0520 11:14:43.465323   40362 command_runner.go:130] > # ]
	I0520 11:14:43.465334   40362 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0520 11:14:43.465343   40362 command_runner.go:130] > # no_pivot = false
	I0520 11:14:43.465355   40362 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0520 11:14:43.465368   40362 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0520 11:14:43.465376   40362 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0520 11:14:43.465389   40362 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0520 11:14:43.465402   40362 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0520 11:14:43.465416   40362 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 11:14:43.465427   40362 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0520 11:14:43.465438   40362 command_runner.go:130] > # Cgroup setting for conmon
	I0520 11:14:43.465453   40362 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0520 11:14:43.465462   40362 command_runner.go:130] > conmon_cgroup = "pod"
	I0520 11:14:43.465473   40362 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0520 11:14:43.465485   40362 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0520 11:14:43.465499   40362 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0520 11:14:43.465508   40362 command_runner.go:130] > conmon_env = [
	I0520 11:14:43.465519   40362 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 11:14:43.465527   40362 command_runner.go:130] > ]
	I0520 11:14:43.465537   40362 command_runner.go:130] > # Additional environment variables to set for all the
	I0520 11:14:43.465548   40362 command_runner.go:130] > # containers. These are overridden if set in the
	I0520 11:14:43.465563   40362 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0520 11:14:43.465572   40362 command_runner.go:130] > # default_env = [
	I0520 11:14:43.465578   40362 command_runner.go:130] > # ]
	I0520 11:14:43.465591   40362 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0520 11:14:43.465607   40362 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0520 11:14:43.465616   40362 command_runner.go:130] > # selinux = false
	I0520 11:14:43.465627   40362 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0520 11:14:43.465640   40362 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0520 11:14:43.465653   40362 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0520 11:14:43.465663   40362 command_runner.go:130] > # seccomp_profile = ""
	I0520 11:14:43.465676   40362 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0520 11:14:43.465688   40362 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0520 11:14:43.465699   40362 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0520 11:14:43.465709   40362 command_runner.go:130] > # which might increase security.
	I0520 11:14:43.465717   40362 command_runner.go:130] > # This option is currently deprecated,
	I0520 11:14:43.465730   40362 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0520 11:14:43.465741   40362 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0520 11:14:43.465754   40362 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0520 11:14:43.465768   40362 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0520 11:14:43.465782   40362 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0520 11:14:43.465795   40362 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0520 11:14:43.465806   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.465818   40362 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0520 11:14:43.465836   40362 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0520 11:14:43.465847   40362 command_runner.go:130] > # the cgroup blockio controller.
	I0520 11:14:43.465855   40362 command_runner.go:130] > # blockio_config_file = ""
	I0520 11:14:43.465870   40362 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0520 11:14:43.465879   40362 command_runner.go:130] > # blockio parameters.
	I0520 11:14:43.465886   40362 command_runner.go:130] > # blockio_reload = false
	I0520 11:14:43.465900   40362 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0520 11:14:43.465909   40362 command_runner.go:130] > # irqbalance daemon.
	I0520 11:14:43.465919   40362 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0520 11:14:43.465932   40362 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0520 11:14:43.465947   40362 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0520 11:14:43.465961   40362 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0520 11:14:43.465974   40362 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0520 11:14:43.465986   40362 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0520 11:14:43.465998   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.466009   40362 command_runner.go:130] > # rdt_config_file = ""
	I0520 11:14:43.466021   40362 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0520 11:14:43.466032   40362 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0520 11:14:43.466057   40362 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0520 11:14:43.466067   40362 command_runner.go:130] > # separate_pull_cgroup = ""
	I0520 11:14:43.466077   40362 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0520 11:14:43.466091   40362 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0520 11:14:43.466099   40362 command_runner.go:130] > # will be added.
	I0520 11:14:43.466107   40362 command_runner.go:130] > # default_capabilities = [
	I0520 11:14:43.466116   40362 command_runner.go:130] > # 	"CHOWN",
	I0520 11:14:43.466124   40362 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0520 11:14:43.466133   40362 command_runner.go:130] > # 	"FSETID",
	I0520 11:14:43.466141   40362 command_runner.go:130] > # 	"FOWNER",
	I0520 11:14:43.466149   40362 command_runner.go:130] > # 	"SETGID",
	I0520 11:14:43.466156   40362 command_runner.go:130] > # 	"SETUID",
	I0520 11:14:43.466163   40362 command_runner.go:130] > # 	"SETPCAP",
	I0520 11:14:43.466172   40362 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0520 11:14:43.466181   40362 command_runner.go:130] > # 	"KILL",
	I0520 11:14:43.466187   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466201   40362 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0520 11:14:43.466216   40362 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0520 11:14:43.466228   40362 command_runner.go:130] > # add_inheritable_capabilities = false
	I0520 11:14:43.466241   40362 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0520 11:14:43.466253   40362 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 11:14:43.466263   40362 command_runner.go:130] > default_sysctls = [
	I0520 11:14:43.466272   40362 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0520 11:14:43.466281   40362 command_runner.go:130] > ]
	I0520 11:14:43.466289   40362 command_runner.go:130] > # List of devices on the host that a
	I0520 11:14:43.466303   40362 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0520 11:14:43.466313   40362 command_runner.go:130] > # allowed_devices = [
	I0520 11:14:43.466321   40362 command_runner.go:130] > # 	"/dev/fuse",
	I0520 11:14:43.466327   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466337   40362 command_runner.go:130] > # List of additional devices. specified as
	I0520 11:14:43.466352   40362 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0520 11:14:43.466364   40362 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0520 11:14:43.466377   40362 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0520 11:14:43.466387   40362 command_runner.go:130] > # additional_devices = [
	I0520 11:14:43.466396   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466405   40362 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0520 11:14:43.466414   40362 command_runner.go:130] > # cdi_spec_dirs = [
	I0520 11:14:43.466421   40362 command_runner.go:130] > # 	"/etc/cdi",
	I0520 11:14:43.466431   40362 command_runner.go:130] > # 	"/var/run/cdi",
	I0520 11:14:43.466437   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466449   40362 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0520 11:14:43.466462   40362 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0520 11:14:43.466472   40362 command_runner.go:130] > # Defaults to false.
	I0520 11:14:43.466484   40362 command_runner.go:130] > # device_ownership_from_security_context = false
	I0520 11:14:43.466497   40362 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0520 11:14:43.466509   40362 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0520 11:14:43.466516   40362 command_runner.go:130] > # hooks_dir = [
	I0520 11:14:43.466527   40362 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0520 11:14:43.466534   40362 command_runner.go:130] > # ]
	I0520 11:14:43.466545   40362 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0520 11:14:43.466558   40362 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0520 11:14:43.466569   40362 command_runner.go:130] > # its default mounts from the following two files:
	I0520 11:14:43.466578   40362 command_runner.go:130] > #
	I0520 11:14:43.466590   40362 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0520 11:14:43.466603   40362 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0520 11:14:43.466613   40362 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0520 11:14:43.466621   40362 command_runner.go:130] > #
	I0520 11:14:43.466632   40362 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0520 11:14:43.466646   40362 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0520 11:14:43.466660   40362 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0520 11:14:43.466672   40362 command_runner.go:130] > #      only add mounts it finds in this file.
	I0520 11:14:43.466680   40362 command_runner.go:130] > #
	I0520 11:14:43.466688   40362 command_runner.go:130] > # default_mounts_file = ""
	I0520 11:14:43.466700   40362 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0520 11:14:43.466714   40362 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0520 11:14:43.466722   40362 command_runner.go:130] > pids_limit = 1024
	I0520 11:14:43.466733   40362 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0520 11:14:43.466746   40362 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0520 11:14:43.466760   40362 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0520 11:14:43.466776   40362 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0520 11:14:43.466786   40362 command_runner.go:130] > # log_size_max = -1
	I0520 11:14:43.466801   40362 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0520 11:14:43.466811   40362 command_runner.go:130] > # log_to_journald = false
	I0520 11:14:43.466829   40362 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0520 11:14:43.466840   40362 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0520 11:14:43.466849   40362 command_runner.go:130] > # Path to directory for container attach sockets.
	I0520 11:14:43.466860   40362 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0520 11:14:43.466870   40362 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0520 11:14:43.466880   40362 command_runner.go:130] > # bind_mount_prefix = ""
	I0520 11:14:43.466893   40362 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0520 11:14:43.466902   40362 command_runner.go:130] > # read_only = false
	I0520 11:14:43.466913   40362 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0520 11:14:43.466926   40362 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0520 11:14:43.466937   40362 command_runner.go:130] > # live configuration reload.
	I0520 11:14:43.466944   40362 command_runner.go:130] > # log_level = "info"
	I0520 11:14:43.466955   40362 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0520 11:14:43.466967   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.466980   40362 command_runner.go:130] > # log_filter = ""
	I0520 11:14:43.466993   40362 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0520 11:14:43.467009   40362 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0520 11:14:43.467018   40362 command_runner.go:130] > # separated by comma.
	I0520 11:14:43.467031   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467041   40362 command_runner.go:130] > # uid_mappings = ""
	I0520 11:14:43.467052   40362 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0520 11:14:43.467065   40362 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0520 11:14:43.467075   40362 command_runner.go:130] > # separated by comma.
	I0520 11:14:43.467088   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467096   40362 command_runner.go:130] > # gid_mappings = ""
	I0520 11:14:43.467106   40362 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0520 11:14:43.467120   40362 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 11:14:43.467133   40362 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 11:14:43.467148   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467159   40362 command_runner.go:130] > # minimum_mappable_uid = -1
	I0520 11:14:43.467173   40362 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0520 11:14:43.467184   40362 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0520 11:14:43.467194   40362 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0520 11:14:43.467207   40362 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0520 11:14:43.467217   40362 command_runner.go:130] > # minimum_mappable_gid = -1
	I0520 11:14:43.467231   40362 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0520 11:14:43.467245   40362 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0520 11:14:43.467257   40362 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0520 11:14:43.467267   40362 command_runner.go:130] > # ctr_stop_timeout = 30
	I0520 11:14:43.467278   40362 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0520 11:14:43.467291   40362 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0520 11:14:43.467303   40362 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0520 11:14:43.467312   40362 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0520 11:14:43.467321   40362 command_runner.go:130] > drop_infra_ctr = false
	I0520 11:14:43.467334   40362 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0520 11:14:43.467346   40362 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0520 11:14:43.467363   40362 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0520 11:14:43.467373   40362 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0520 11:14:43.467388   40362 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0520 11:14:43.467400   40362 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0520 11:14:43.467414   40362 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0520 11:14:43.467425   40362 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0520 11:14:43.467436   40362 command_runner.go:130] > # shared_cpuset = ""
	I0520 11:14:43.467449   40362 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0520 11:14:43.467461   40362 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0520 11:14:43.467471   40362 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0520 11:14:43.467486   40362 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0520 11:14:43.467497   40362 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0520 11:14:43.467507   40362 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0520 11:14:43.467520   40362 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0520 11:14:43.467530   40362 command_runner.go:130] > # enable_criu_support = false
	I0520 11:14:43.467542   40362 command_runner.go:130] > # Enable/disable the generation of the container,
	I0520 11:14:43.467555   40362 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0520 11:14:43.467566   40362 command_runner.go:130] > # enable_pod_events = false
	I0520 11:14:43.467580   40362 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 11:14:43.467593   40362 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0520 11:14:43.467604   40362 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0520 11:14:43.467612   40362 command_runner.go:130] > # default_runtime = "runc"
	I0520 11:14:43.467623   40362 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0520 11:14:43.467638   40362 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0520 11:14:43.467656   40362 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0520 11:14:43.467668   40362 command_runner.go:130] > # creation as a file is not desired either.
	I0520 11:14:43.467685   40362 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0520 11:14:43.467695   40362 command_runner.go:130] > # the hostname is being managed dynamically.
	I0520 11:14:43.467702   40362 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0520 11:14:43.467711   40362 command_runner.go:130] > # ]
	I0520 11:14:43.467722   40362 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0520 11:14:43.467736   40362 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0520 11:14:43.467749   40362 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0520 11:14:43.467761   40362 command_runner.go:130] > # Each entry in the table should follow the format:
	I0520 11:14:43.467770   40362 command_runner.go:130] > #
	I0520 11:14:43.467779   40362 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0520 11:14:43.467789   40362 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0520 11:14:43.467812   40362 command_runner.go:130] > # runtime_type = "oci"
	I0520 11:14:43.467827   40362 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0520 11:14:43.467839   40362 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0520 11:14:43.467850   40362 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0520 11:14:43.467861   40362 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0520 11:14:43.467873   40362 command_runner.go:130] > # monitor_env = []
	I0520 11:14:43.467884   40362 command_runner.go:130] > # privileged_without_host_devices = false
	I0520 11:14:43.467892   40362 command_runner.go:130] > # allowed_annotations = []
	I0520 11:14:43.467905   40362 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0520 11:14:43.467914   40362 command_runner.go:130] > # Where:
	I0520 11:14:43.467924   40362 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0520 11:14:43.467938   40362 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0520 11:14:43.467952   40362 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0520 11:14:43.467966   40362 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0520 11:14:43.467975   40362 command_runner.go:130] > #   in $PATH.
	I0520 11:14:43.467987   40362 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0520 11:14:43.467998   40362 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0520 11:14:43.468010   40362 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0520 11:14:43.468018   40362 command_runner.go:130] > #   state.
	I0520 11:14:43.468030   40362 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0520 11:14:43.468042   40362 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0520 11:14:43.468054   40362 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0520 11:14:43.468066   40362 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0520 11:14:43.468079   40362 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0520 11:14:43.468094   40362 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0520 11:14:43.468105   40362 command_runner.go:130] > #   The currently recognized values are:
	I0520 11:14:43.468117   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0520 11:14:43.468132   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0520 11:14:43.468145   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0520 11:14:43.468158   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0520 11:14:43.468173   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0520 11:14:43.468187   40362 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0520 11:14:43.468201   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0520 11:14:43.468215   40362 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0520 11:14:43.468228   40362 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0520 11:14:43.468241   40362 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0520 11:14:43.468252   40362 command_runner.go:130] > #   deprecated option "conmon".
	I0520 11:14:43.468264   40362 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0520 11:14:43.468275   40362 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0520 11:14:43.468289   40362 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0520 11:14:43.468300   40362 command_runner.go:130] > #   should be moved to the container's cgroup
	I0520 11:14:43.468312   40362 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0520 11:14:43.468324   40362 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0520 11:14:43.468338   40362 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0520 11:14:43.468350   40362 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0520 11:14:43.468358   40362 command_runner.go:130] > #
	I0520 11:14:43.468366   40362 command_runner.go:130] > # Using the seccomp notifier feature:
	I0520 11:14:43.468374   40362 command_runner.go:130] > #
	I0520 11:14:43.468385   40362 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0520 11:14:43.468398   40362 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0520 11:14:43.468406   40362 command_runner.go:130] > #
	I0520 11:14:43.468417   40362 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0520 11:14:43.468431   40362 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0520 11:14:43.468440   40362 command_runner.go:130] > #
	I0520 11:14:43.468450   40362 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0520 11:14:43.468459   40362 command_runner.go:130] > # feature.
	I0520 11:14:43.468465   40362 command_runner.go:130] > #
	I0520 11:14:43.468476   40362 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0520 11:14:43.468490   40362 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0520 11:14:43.468504   40362 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0520 11:14:43.468517   40362 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0520 11:14:43.468530   40362 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0520 11:14:43.468538   40362 command_runner.go:130] > #
	I0520 11:14:43.468549   40362 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0520 11:14:43.468562   40362 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0520 11:14:43.468570   40362 command_runner.go:130] > #
	I0520 11:14:43.468582   40362 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0520 11:14:43.468595   40362 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0520 11:14:43.468603   40362 command_runner.go:130] > #
	I0520 11:14:43.468613   40362 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0520 11:14:43.468625   40362 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0520 11:14:43.468632   40362 command_runner.go:130] > # limitation.
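	For context, a minimal sketch of how the notifier described above would be wired up, assuming a drop-in under /etc/crio/crio.conf.d/ (the drop-in file name, pod name, and image are hypothetical and not taken from this run):

	# Allow the notifier annotation for the runc handler via a hypothetical drop-in,
	# then restart CRI-O so the change takes effect.
	sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf <<'EOF'
	[crio.runtime.runtimes.runc]
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio

	# Opt a pod in: it must carry the annotation, use restartPolicy Never,
	# and select a seccomp profile for CRI-O to modify (illustrative only).
	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notify-demo          # hypothetical name
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never               # required, see the note above
	  securityContext:
	    seccompProfile:
	      type: RuntimeDefault           # the notifier modifies the chosen profile
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF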
	I0520 11:14:43.468643   40362 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0520 11:14:43.468654   40362 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0520 11:14:43.468663   40362 command_runner.go:130] > runtime_type = "oci"
	I0520 11:14:43.468671   40362 command_runner.go:130] > runtime_root = "/run/runc"
	I0520 11:14:43.468682   40362 command_runner.go:130] > runtime_config_path = ""
	I0520 11:14:43.468693   40362 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0520 11:14:43.468704   40362 command_runner.go:130] > monitor_cgroup = "pod"
	I0520 11:14:43.468713   40362 command_runner.go:130] > monitor_exec_cgroup = ""
	I0520 11:14:43.468722   40362 command_runner.go:130] > monitor_env = [
	I0520 11:14:43.468732   40362 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0520 11:14:43.468740   40362 command_runner.go:130] > ]
	I0520 11:14:43.468749   40362 command_runner.go:130] > privileged_without_host_devices = false
	I0520 11:14:43.468764   40362 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0520 11:14:43.468777   40362 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0520 11:14:43.468790   40362 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0520 11:14:43.468803   40362 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0520 11:14:43.468819   40362 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0520 11:14:43.468834   40362 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0520 11:14:43.468852   40362 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0520 11:14:43.468868   40362 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0520 11:14:43.468880   40362 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0520 11:14:43.468894   40362 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0520 11:14:43.468903   40362 command_runner.go:130] > # Example:
	I0520 11:14:43.468912   40362 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0520 11:14:43.468923   40362 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0520 11:14:43.468948   40362 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0520 11:14:43.468957   40362 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0520 11:14:43.468967   40362 command_runner.go:130] > # cpuset = 0
	I0520 11:14:43.468974   40362 command_runner.go:130] > # cpushares = "0-1"
	I0520 11:14:43.468983   40362 command_runner.go:130] > # Where:
	I0520 11:14:43.468991   40362 command_runner.go:130] > # The workload name is workload-type.
	I0520 11:14:43.469007   40362 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0520 11:14:43.469019   40362 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0520 11:14:43.469031   40362 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0520 11:14:43.469045   40362 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0520 11:14:43.469058   40362 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
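	A pod-side sketch mirroring the workload-type example above, assuming that configuration is in place (pod name, container name, and the cpushares value are hypothetical):

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                               # hypothetical
	  annotations:
	    io.crio/workload: ""                            # activation annotation (value ignored)
	    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override, as in the example
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF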
	I0520 11:14:43.469069   40362 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0520 11:14:43.469084   40362 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0520 11:14:43.469095   40362 command_runner.go:130] > # Default value is set to true
	I0520 11:14:43.469106   40362 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0520 11:14:43.469113   40362 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0520 11:14:43.469120   40362 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0520 11:14:43.469126   40362 command_runner.go:130] > # Default value is set to 'false'
	I0520 11:14:43.469133   40362 command_runner.go:130] > # disable_hostport_mapping = false
	I0520 11:14:43.469143   40362 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0520 11:14:43.469148   40362 command_runner.go:130] > #
	I0520 11:14:43.469158   40362 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0520 11:14:43.469167   40362 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0520 11:14:43.469177   40362 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0520 11:14:43.469187   40362 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0520 11:14:43.469196   40362 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0520 11:14:43.469202   40362 command_runner.go:130] > [crio.image]
	I0520 11:14:43.469212   40362 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0520 11:14:43.469219   40362 command_runner.go:130] > # default_transport = "docker://"
	I0520 11:14:43.469229   40362 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0520 11:14:43.469239   40362 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0520 11:14:43.469246   40362 command_runner.go:130] > # global_auth_file = ""
	I0520 11:14:43.469255   40362 command_runner.go:130] > # The image used to instantiate infra containers.
	I0520 11:14:43.469263   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.469272   40362 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0520 11:14:43.469283   40362 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0520 11:14:43.469293   40362 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0520 11:14:43.469301   40362 command_runner.go:130] > # This option supports live configuration reload.
	I0520 11:14:43.469308   40362 command_runner.go:130] > # pause_image_auth_file = ""
	I0520 11:14:43.469317   40362 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0520 11:14:43.469331   40362 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0520 11:14:43.469344   40362 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0520 11:14:43.469355   40362 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0520 11:14:43.469364   40362 command_runner.go:130] > # pause_command = "/pause"
	I0520 11:14:43.469376   40362 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0520 11:14:43.469390   40362 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0520 11:14:43.469403   40362 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0520 11:14:43.469420   40362 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0520 11:14:43.469433   40362 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0520 11:14:43.469445   40362 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0520 11:14:43.469454   40362 command_runner.go:130] > # pinned_images = [
	I0520 11:14:43.469462   40362 command_runner.go:130] > # ]
	I0520 11:14:43.469478   40362 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0520 11:14:43.469492   40362 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0520 11:14:43.469505   40362 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0520 11:14:43.469518   40362 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0520 11:14:43.469527   40362 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0520 11:14:43.469536   40362 command_runner.go:130] > # signature_policy = ""
	I0520 11:14:43.469546   40362 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0520 11:14:43.469560   40362 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0520 11:14:43.469574   40362 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0520 11:14:43.469587   40362 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0520 11:14:43.469600   40362 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0520 11:14:43.469612   40362 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
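	For reference, the system-wide default policy mentioned above is typically the permissive one from containers-policy.json(5); the content shown after the command is illustrative, not captured from this run:

	# Inspect the system-wide image trust policy used when signature_policy is unset.
	cat /etc/containers/policy.json
	{
	  "default": [
	    { "type": "insecureAcceptAnything" }
	  ]
	}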
	I0520 11:14:43.469626   40362 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0520 11:14:43.469639   40362 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0520 11:14:43.469648   40362 command_runner.go:130] > # changing them here.
	I0520 11:14:43.469657   40362 command_runner.go:130] > # insecure_registries = [
	I0520 11:14:43.469665   40362 command_runner.go:130] > # ]
	I0520 11:14:43.469677   40362 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0520 11:14:43.469688   40362 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0520 11:14:43.469695   40362 command_runner.go:130] > # image_volumes = "mkdir"
	I0520 11:14:43.469707   40362 command_runner.go:130] > # Temporary directory to use for storing big files
	I0520 11:14:43.469718   40362 command_runner.go:130] > # big_files_temporary_dir = ""
	I0520 11:14:43.469731   40362 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0520 11:14:43.469740   40362 command_runner.go:130] > # CNI plugins.
	I0520 11:14:43.469749   40362 command_runner.go:130] > [crio.network]
	I0520 11:14:43.469762   40362 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0520 11:14:43.469773   40362 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0520 11:14:43.469780   40362 command_runner.go:130] > # cni_default_network = ""
	I0520 11:14:43.469792   40362 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0520 11:14:43.469803   40362 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0520 11:14:43.469812   40362 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0520 11:14:43.469827   40362 command_runner.go:130] > # plugin_dirs = [
	I0520 11:14:43.469838   40362 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0520 11:14:43.469846   40362 command_runner.go:130] > # ]
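	As an aside, the kind of file CRI-O expects in network_dir is a CNI conflist; a minimal, hypothetical example follows (this cluster actually uses kindnet, which writes its own config, and the file name, network name, and subnet here are assumptions):

	# Hypothetical bridge network dropped into the default network_dir.
	cat <<'EOF' | sudo tee /etc/cni/net.d/99-example.conflist >/dev/null
	{
	  "cniVersion": "0.4.0",
	  "name": "example-net",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
	    }
	  ]
	}
	EOF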
	I0520 11:14:43.469857   40362 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0520 11:14:43.469866   40362 command_runner.go:130] > [crio.metrics]
	I0520 11:14:43.469875   40362 command_runner.go:130] > # Globally enable or disable metrics support.
	I0520 11:14:43.469885   40362 command_runner.go:130] > enable_metrics = true
	I0520 11:14:43.469894   40362 command_runner.go:130] > # Specify enabled metrics collectors.
	I0520 11:14:43.469905   40362 command_runner.go:130] > # Per default all metrics are enabled.
	I0520 11:14:43.469920   40362 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0520 11:14:43.469934   40362 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0520 11:14:43.469947   40362 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0520 11:14:43.469957   40362 command_runner.go:130] > # metrics_collectors = [
	I0520 11:14:43.469963   40362 command_runner.go:130] > # 	"operations",
	I0520 11:14:43.469975   40362 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0520 11:14:43.469986   40362 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0520 11:14:43.469994   40362 command_runner.go:130] > # 	"operations_errors",
	I0520 11:14:43.470004   40362 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0520 11:14:43.470014   40362 command_runner.go:130] > # 	"image_pulls_by_name",
	I0520 11:14:43.470024   40362 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0520 11:14:43.470033   40362 command_runner.go:130] > # 	"image_pulls_failures",
	I0520 11:14:43.470042   40362 command_runner.go:130] > # 	"image_pulls_successes",
	I0520 11:14:43.470050   40362 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0520 11:14:43.470060   40362 command_runner.go:130] > # 	"image_layer_reuse",
	I0520 11:14:43.470071   40362 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0520 11:14:43.470079   40362 command_runner.go:130] > # 	"containers_oom_total",
	I0520 11:14:43.470087   40362 command_runner.go:130] > # 	"containers_oom",
	I0520 11:14:43.470095   40362 command_runner.go:130] > # 	"processes_defunct",
	I0520 11:14:43.470105   40362 command_runner.go:130] > # 	"operations_total",
	I0520 11:14:43.470114   40362 command_runner.go:130] > # 	"operations_latency_seconds",
	I0520 11:14:43.470124   40362 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0520 11:14:43.470131   40362 command_runner.go:130] > # 	"operations_errors_total",
	I0520 11:14:43.470142   40362 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0520 11:14:43.470150   40362 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0520 11:14:43.470161   40362 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0520 11:14:43.470172   40362 command_runner.go:130] > # 	"image_pulls_success_total",
	I0520 11:14:43.470182   40362 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0520 11:14:43.470193   40362 command_runner.go:130] > # 	"containers_oom_count_total",
	I0520 11:14:43.470203   40362 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0520 11:14:43.470212   40362 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0520 11:14:43.470218   40362 command_runner.go:130] > # ]
	I0520 11:14:43.470231   40362 command_runner.go:130] > # The port on which the metrics server will listen.
	I0520 11:14:43.470241   40362 command_runner.go:130] > # metrics_port = 9090
	I0520 11:14:43.470253   40362 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0520 11:14:43.470264   40362 command_runner.go:130] > # metrics_socket = ""
	I0520 11:14:43.470275   40362 command_runner.go:130] > # The certificate for the secure metrics server.
	I0520 11:14:43.470288   40362 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0520 11:14:43.470301   40362 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0520 11:14:43.470310   40362 command_runner.go:130] > # certificate on any modification event.
	I0520 11:14:43.470319   40362 command_runner.go:130] > # metrics_cert = ""
	I0520 11:14:43.470331   40362 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0520 11:14:43.470343   40362 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0520 11:14:43.470370   40362 command_runner.go:130] > # metrics_key = ""
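	Since enable_metrics is true here and the default port is 9090 (per metrics_port above), the endpoint can be scraped directly on the node; a hedged one-liner, with output that would vary by run, looks like this:

	# Spot-check the CRI-O Prometheus endpoint (illustrative, not from this run).
	curl -s http://127.0.0.1:9090/metrics | grep -E '^crio_' | head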
	I0520 11:14:43.470387   40362 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0520 11:14:43.470395   40362 command_runner.go:130] > [crio.tracing]
	I0520 11:14:43.470406   40362 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0520 11:14:43.470416   40362 command_runner.go:130] > # enable_tracing = false
	I0520 11:14:43.470426   40362 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0520 11:14:43.470437   40362 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0520 11:14:43.470452   40362 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0520 11:14:43.470463   40362 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0520 11:14:43.470470   40362 command_runner.go:130] > # CRI-O NRI configuration.
	I0520 11:14:43.470476   40362 command_runner.go:130] > [crio.nri]
	I0520 11:14:43.470487   40362 command_runner.go:130] > # Globally enable or disable NRI.
	I0520 11:14:43.470498   40362 command_runner.go:130] > # enable_nri = false
	I0520 11:14:43.470509   40362 command_runner.go:130] > # NRI socket to listen on.
	I0520 11:14:43.470519   40362 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0520 11:14:43.470529   40362 command_runner.go:130] > # NRI plugin directory to use.
	I0520 11:14:43.470538   40362 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0520 11:14:43.470549   40362 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0520 11:14:43.470559   40362 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0520 11:14:43.470569   40362 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0520 11:14:43.470579   40362 command_runner.go:130] > # nri_disable_connections = false
	I0520 11:14:43.470591   40362 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0520 11:14:43.470602   40362 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0520 11:14:43.470612   40362 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0520 11:14:43.470623   40362 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0520 11:14:43.470635   40362 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0520 11:14:43.470644   40362 command_runner.go:130] > [crio.stats]
	I0520 11:14:43.470654   40362 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0520 11:14:43.470666   40362 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0520 11:14:43.470677   40362 command_runner.go:130] > # stats_collection_period = 0
	I0520 11:14:43.470784   40362 cni.go:84] Creating CNI manager for ""
	I0520 11:14:43.470795   40362 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0520 11:14:43.470816   40362 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:14:43.470861   40362 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.145 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-563238 NodeName:multinode-563238 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:14:43.471017   40362 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-563238"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
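	The generated config above is copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below) and consumed by the bundled kubeadm binary; as a rough sketch of the manual equivalent, assuming that file and the binary path shown in the next Run line:

	# Illustrative only; minikube drives kubeadm itself during the actual start.
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run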
	
	I0520 11:14:43.471095   40362 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:14:43.481399   40362 command_runner.go:130] > kubeadm
	I0520 11:14:43.481418   40362 command_runner.go:130] > kubectl
	I0520 11:14:43.481422   40362 command_runner.go:130] > kubelet
	I0520 11:14:43.481439   40362 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:14:43.481481   40362 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:14:43.491181   40362 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0520 11:14:43.508199   40362 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:14:43.525975   40362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0520 11:14:43.543034   40362 ssh_runner.go:195] Run: grep 192.168.39.145	control-plane.minikube.internal$ /etc/hosts
	I0520 11:14:43.546837   40362 command_runner.go:130] > 192.168.39.145	control-plane.minikube.internal
	I0520 11:14:43.547053   40362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:43.684078   40362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:14:43.698917   40362 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238 for IP: 192.168.39.145
	I0520 11:14:43.698933   40362 certs.go:194] generating shared ca certs ...
	I0520 11:14:43.698946   40362 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:14:43.699086   40362 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:14:43.699131   40362 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:14:43.699147   40362 certs.go:256] generating profile certs ...
	I0520 11:14:43.699229   40362 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/client.key
	I0520 11:14:43.699284   40362 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.key.3a25a639
	I0520 11:14:43.699317   40362 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.key
	I0520 11:14:43.699327   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0520 11:14:43.699338   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0520 11:14:43.699350   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0520 11:14:43.699360   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0520 11:14:43.699371   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0520 11:14:43.699382   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0520 11:14:43.699395   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0520 11:14:43.699407   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0520 11:14:43.699455   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:14:43.699480   40362 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:14:43.699491   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:14:43.699512   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:14:43.699532   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:14:43.699552   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:14:43.699589   40362 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:14:43.699612   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> /usr/share/ca-certificates/111622.pem
	I0520 11:14:43.699625   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.699637   40362 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem -> /usr/share/ca-certificates/11162.pem
	I0520 11:14:43.700175   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:14:43.725101   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:14:43.748352   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:14:43.771646   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:14:43.795061   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:14:43.818572   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:14:43.842377   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:14:43.866164   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/multinode-563238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:14:43.890072   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:14:43.913034   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:14:43.937294   40362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:14:43.960645   40362 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:14:43.976966   40362 ssh_runner.go:195] Run: openssl version
	I0520 11:14:43.982559   40362 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0520 11:14:43.982620   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:14:43.993355   40362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.997532   40362 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.997555   40362 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:43.997590   40362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:44.003123   40362 command_runner.go:130] > b5213941
	I0520 11:14:44.003185   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:14:44.012371   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:14:44.023068   40362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.027287   40362 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.027372   40362 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.027411   40362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:14:44.032717   40362 command_runner.go:130] > 51391683
	I0520 11:14:44.032935   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:14:44.042227   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:14:44.052948   40362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.057171   40362 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.057200   40362 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.057236   40362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:14:44.062434   40362 command_runner.go:130] > 3ec20f2e
	I0520 11:14:44.062636   40362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
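	The three command sequences above install each CA into the OpenSSL hashed lookup directory; a compact recap of the same steps as one loop (illustrative, using the file names from this log) would be:

	# Link each CA into /etc/ssl/certs and create the <subject-hash>.0 symlink.
	for pem in minikubeCA.pem 11162.pem 111622.pem; do
	  sudo ln -fs /usr/share/ca-certificates/$pem /etc/ssl/certs/$pem
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem)
	  sudo ln -fs /etc/ssl/certs/$pem /etc/ssl/certs/$h.0
	done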
	I0520 11:14:44.072113   40362 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:14:44.076448   40362 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:14:44.076469   40362 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0520 11:14:44.076475   40362 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0520 11:14:44.076482   40362 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0520 11:14:44.076490   40362 command_runner.go:130] > Access: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076498   40362 command_runner.go:130] > Modify: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076506   40362 command_runner.go:130] > Change: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076519   40362 command_runner.go:130] >  Birth: 2024-05-20 11:08:04.253830910 +0000
	I0520 11:14:44.076559   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:14:44.082134   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.082259   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:14:44.087568   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.087695   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:14:44.093079   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.093207   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:14:44.098322   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.098446   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:14:44.103727   40362 command_runner.go:130] > Certificate will not expire
	I0520 11:14:44.103802   40362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:14:44.109103   40362 command_runner.go:130] > Certificate will not expire
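	The -checkend 86400 calls above confirm each control-plane certificate is valid for at least another 24 hours; the same check can be run across the whole certs directory with a hedged one-liner (path taken from the commands above, loop itself is illustrative):

	# Flag any certificate under the minikube certs dir that expires within a day.
	sudo find /var/lib/minikube/certs -name '*.crt' -exec sh -c \
	  'openssl x509 -noout -checkend 86400 -in "$1" || echo "expiring soon: $1"' _ {} \;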
	I0520 11:14:44.109213   40362 kubeadm.go:391] StartCluster: {Name:multinode-563238 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-563238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.2 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:14:44.109335   40362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:14:44.109384   40362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:14:44.145192   40362 command_runner.go:130] > 5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402
	I0520 11:14:44.145214   40362 command_runner.go:130] > 9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2
	I0520 11:14:44.145231   40362 command_runner.go:130] > e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654
	I0520 11:14:44.145241   40362 command_runner.go:130] > 3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879
	I0520 11:14:44.145251   40362 command_runner.go:130] > 59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2
	I0520 11:14:44.145261   40362 command_runner.go:130] > d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c
	I0520 11:14:44.145270   40362 command_runner.go:130] > 28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0
	I0520 11:14:44.145277   40362 command_runner.go:130] > c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9
	I0520 11:14:44.146421   40362 cri.go:89] found id: "5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402"
	I0520 11:14:44.146436   40362 cri.go:89] found id: "9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2"
	I0520 11:14:44.146442   40362 cri.go:89] found id: "e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654"
	I0520 11:14:44.146448   40362 cri.go:89] found id: "3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879"
	I0520 11:14:44.146453   40362 cri.go:89] found id: "59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2"
	I0520 11:14:44.146457   40362 cri.go:89] found id: "d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c"
	I0520 11:14:44.146460   40362 cri.go:89] found id: "28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0"
	I0520 11:14:44.146463   40362 cri.go:89] found id: "c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9"
	I0520 11:14:44.146465   40362 cri.go:89] found id: ""
	I0520 11:14:44.146501   40362 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.768770305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203910768746314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd393b0b-5c94-42e9-80c6-2e38c9ffcac1 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.769227870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87a9862a-a2ca-46a0-9d3a-f3e8a28d3e42 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.769365512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87a9862a-a2ca-46a0-9d3a-f3e8a28d3e42 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.769693034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87a9862a-a2ca-46a0-9d3a-f3e8a28d3e42 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.808510340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c025b836-c30a-4c76-8b13-eadbbee50ec4 name=/runtime.v1.RuntimeService/Version
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.808583690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c025b836-c30a-4c76-8b13-eadbbee50ec4 name=/runtime.v1.RuntimeService/Version
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.809634969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=657c8ae2-b788-4504-bddf-d5907abe08d3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.809994601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203910809975448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=657c8ae2-b788-4504-bddf-d5907abe08d3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.810468609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9153d1d-1349-48e5-817d-b242f9a92fdd name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.810523284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9153d1d-1349-48e5-817d-b242f9a92fdd name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.810851581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9153d1d-1349-48e5-817d-b242f9a92fdd name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.851117213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b60292a0-76a7-4136-b77f-06d0e002e51a name=/runtime.v1.RuntimeService/Version
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.851191344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b60292a0-76a7-4136-b77f-06d0e002e51a name=/runtime.v1.RuntimeService/Version
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.852171390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab997ccc-59e1-4065-90b8-7fe8d1a87da7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.852789685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203910852767347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab997ccc-59e1-4065-90b8-7fe8d1a87da7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.853242871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbdefe8e-1263-41b4-8420-ab55b2534423 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.853414688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbdefe8e-1263-41b4-8420-ab55b2534423 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.853774974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbdefe8e-1263-41b4-8420-ab55b2534423 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.892049349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac6960aa-98dc-4b50-b810-f00045b10edc name=/runtime.v1.RuntimeService/Version
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.892119407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac6960aa-98dc-4b50-b810-f00045b10edc name=/runtime.v1.RuntimeService/Version
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.894060219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3249f87e-de31-416d-9557-df6ad63b0425 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.894527499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716203910894503575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133242,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3249f87e-de31-416d-9557-df6ad63b0425 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.895386828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afa6f68a-d7b7-4b76-a9ad-6c7f84cdaff6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.895463300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afa6f68a-d7b7-4b76-a9ad-6c7f84cdaff6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:18:30 multinode-563238 crio[2862]: time="2024-05-20 11:18:30.895778590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27bca91f6d6816c9959c2c629ab65565e722b25dddf61844604d7098bd480aa0,PodSandboxId:10f73ac2ef4b9bdf6f60ce979ac0f611d48885ca6cc18d05b6c96c084ba01dd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716203724184038955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c,PodSandboxId:e8981df7ed0d267f17954e028200b7d00c9ac97d81c58c44bd0f10b3f74c472a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716203692290716775,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011,PodSandboxId:5f0f94851dba625251a0d9ff237088973640163ab34bb8b0572bf5ee5af8087a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1716203692192838472,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5a
df9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195,PodSandboxId:8b9c34bf7b1ec47989a7ac1deafea0e59ae0a67e7852974e232dcd1834f3f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716203692072545085,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da-218ad474da19,},Annotations:map[string]
string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6a556f3371d804ff261e3f386ce16f04680c53afa96f5237f2f5576e8ed9a3,PodSandboxId:58cddfbaf13ab2bb21ef45d46dd93ae00c07387e375df0cd869fe6d810671f14,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716203691986911632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.ku
bernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524,PodSandboxId:ffe9e8a28237fceb3584590c6030ec7d926bae35a8f15451e2a841e6f5949707,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716203686647008895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string{io.kubernetes.container.hash: 35b42501,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68,PodSandboxId:653f49e6e9517cba6b2d6923b8ef2607b46cea0b43a0eaf7463a54c86b30865d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716203686691432273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631,PodSandboxId:a9fc672468d0aafd1c25e6cb7884e2e483aa6a20badd42deb968afc7a1236e12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716203686602887580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0,PodSandboxId:bf729013c521c3d3491bb35331a77a43f252529948bc6262da14887992ecc36f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716203686595885844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.container.hash: aaab6ac8,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb869c60e3b08c929ead0eab0e8e6acf6e76bb5ed30cecc72e1745dfc1d7b840,PodSandboxId:866315da0199034276dff82a6c28af3d3a4708f2c743dbd5ee90d2f0ea5e22d8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716203382299672670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-twghk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e876ac80-7dbd-4cab-9758-991bc4d1e72c,},Annotations:map[string]string{io.kubernetes.container.hash: b714146f,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402,PodSandboxId:5200ecfdd57944a83d31e85c4c052190a52e47a8eb59980607de60a1c80bbba2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716203339846982887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6pmxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccf30108-39a4-4d98-8f61-27d16c57ba3d,},Annotations:map[string]string{io.kubernetes.container.hash: 584d0e76,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9649bc3306d6ea51630bb34ed9d37baddd52422170963d71a56cd26cdd3e04a2,PodSandboxId:799981de6efef87f15cf4dc78612363bb24f2b163f5bf125b046dc10a19323d7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716203339780704507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: edef1751-3621-4446-b25e-bf6b4411c4ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac106028,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654,PodSandboxId:eca11eca800adeb0983642ce0296006091be4e1dd4d6c98407febb988f47c19e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1716203308227468897,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cwg9g,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: cc5ffb6e-96f3-49d6-97e9-efa5adf9f489,},Annotations:map[string]string{io.kubernetes.container.hash: caabe196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879,PodSandboxId:5d8102a8a077616ff77e8787db352dad712c72091b6bb590d72e5cf1725568df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716203308092071705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8lrj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3c92bc-3be5-42a6-b8da
-218ad474da19,},Annotations:map[string]string{io.kubernetes.container.hash: 4e0eb064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2,PodSandboxId:a95aa0d2462886d9abb6c74ba6138aa26d75ea07dc869315aee42ed10bfc9fc9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716203287848561777,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24268bc0819a6fcfff83f1ef43367671,},Annotations:map[string]string
{io.kubernetes.container.hash: 35b42501,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c,PodSandboxId:6cd079f4b2e872172db22658bca8cb4e922c06d6ddb0b5da3c758bba2e866290,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716203287787479863,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293bacc81b7936a8b6686ddfaafb97fb,},Annotations:map[string]string{io.kubernetes.
container.hash: aaab6ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0,PodSandboxId:8417013f2e4984baaa842c0c2ccd710641bcaaa3c8c0bcc4cf75a9d529813d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716203287740754560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc77254ba6e5422e0a3a34fba549cb2,},Annotations:map[string]string{io
.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9,PodSandboxId:0b4e20bdb7345c40ccb72cb7ee63d5635a8b3f2b23d1c75f2590c9ab38b9bf91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716203287718944430,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-563238,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a673518bc90f63f243fa22d70be0fd3c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afa6f68a-d7b7-4b76-a9ad-6c7f84cdaff6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	27bca91f6d681       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   10f73ac2ef4b9       busybox-fc5497c4f-twghk
	3c73c505331db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   e8981df7ed0d2       coredns-7db6d8ff4d-6pmxg
	397337b81f6ac       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   5f0f94851dba6       kindnet-cwg9g
	39aa15086d2d3       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   8b9c34bf7b1ec       kube-proxy-k8lrj
	2f6a556f3371d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   58cddfbaf13ab       storage-provisioner
	0bdd7e83e2d02       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   653f49e6e9517       kube-scheduler-multinode-563238
	df66636af8e04       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   ffe9e8a28237f       etcd-multinode-563238
	f840bd971547b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   a9fc672468d0a       kube-controller-manager-multinode-563238
	b22c7ac35dfae       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   bf729013c521c       kube-apiserver-multinode-563238
	eb869c60e3b08       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   866315da01990       busybox-fc5497c4f-twghk
	5d6855192c829       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   5200ecfdd5794       coredns-7db6d8ff4d-6pmxg
	9649bc3306d6e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   799981de6efef       storage-provisioner
	e46ea94884403       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      10 minutes ago      Exited              kindnet-cni               0                   eca11eca800ad       kindnet-cwg9g
	3db01e7963ed1       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      10 minutes ago      Exited              kube-proxy                0                   5d8102a8a0776       kube-proxy-k8lrj
	59a147db91ff7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   a95aa0d246288       etcd-multinode-563238
	d833ed2f82deb       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      10 minutes ago      Exited              kube-apiserver            0                   6cd079f4b2e87       kube-apiserver-multinode-563238
	28e64d121b441       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      10 minutes ago      Exited              kube-controller-manager   0                   8417013f2e498       kube-controller-manager-multinode-563238
	c1063cae37531       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      10 minutes ago      Exited              kube-scheduler            0                   0b4e20bdb7345       kube-scheduler-multinode-563238
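	The listing above is the CRI-O view of every container on the control-plane node. Assuming the multinode-563238 profile is still running, roughly the same table can be reproduced by hand with crictl over minikube ssh, for example:
	
	  out/minikube-linux-amd64 -p multinode-563238 ssh "sudo crictl ps -a"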
	
	
	==> coredns [3c73c505331db817f002b06ffea8ee1f58448aacf73c532887d85f826a03485c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48727 - 31337 "HINFO IN 8412042946722979635.2088388643646220200. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015903138s
	
	
	==> coredns [5d6855192c829c99e01dd8b3b9df71c7fe18270a3787a84dcb85f7a05db1c402] <==
	[INFO] 10.244.0.3:51937 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001728176s
	[INFO] 10.244.0.3:56550 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061854s
	[INFO] 10.244.0.3:45509 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003092s
	[INFO] 10.244.0.3:35501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000934994s
	[INFO] 10.244.0.3:54862 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106239s
	[INFO] 10.244.0.3:39157 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046208s
	[INFO] 10.244.0.3:50994 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029828s
	[INFO] 10.244.1.2:44704 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144582s
	[INFO] 10.244.1.2:39127 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136518s
	[INFO] 10.244.1.2:59901 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125581s
	[INFO] 10.244.1.2:40565 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143893s
	[INFO] 10.244.0.3:40418 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193796s
	[INFO] 10.244.0.3:44311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080595s
	[INFO] 10.244.0.3:50008 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092351s
	[INFO] 10.244.0.3:57131 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103833s
	[INFO] 10.244.1.2:35564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165214s
	[INFO] 10.244.1.2:45936 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000209405s
	[INFO] 10.244.1.2:38807 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111289s
	[INFO] 10.244.1.2:43094 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103549s
	[INFO] 10.244.0.3:50964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077955s
	[INFO] 10.244.0.3:39356 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081605s
	[INFO] 10.244.0.3:60628 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060538s
	[INFO] 10.244.0.3:60679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069045s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-563238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-563238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-563238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_08_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:08:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-563238
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:18:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:14:50 +0000   Mon, 20 May 2024 11:08:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    multinode-563238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce1ee2e473ae48a6881d18b833115ca9
	  System UUID:                ce1ee2e4-73ae-48a6-881d-18b833115ca9
	  Boot ID:                    a235f95d-3616-48cf-8c1e-5ae1c11ac3e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-twghk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 coredns-7db6d8ff4d-6pmxg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-563238                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-cwg9g                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-563238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-563238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-k8lrj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-563238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-563238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-563238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-563238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-563238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-563238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-563238 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-563238 event: Registered Node multinode-563238 in Controller
	  Normal  NodeReady                9m32s                  kubelet          Node multinode-563238 status is now: NodeReady
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m45s (x8 over 3m46s)  kubelet          Node multinode-563238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x8 over 3m46s)  kubelet          Node multinode-563238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x7 over 3m46s)  kubelet          Node multinode-563238 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m28s                  node-controller  Node multinode-563238 event: Registered Node multinode-563238 in Controller
	
	
	Name:               multinode-563238-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-563238-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=multinode-563238
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_20T11_15_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:15:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-563238-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:16:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:16:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:16:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:16:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 20 May 2024 11:15:57 +0000   Mon, 20 May 2024 11:16:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    multinode-563238-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0875580e0ee4b0fb0a716be7e7602df
	  System UUID:                b0875580-e0ee-4b0f-b0a7-16be7e7602df
	  Boot ID:                    2164aa3c-32ef-472a-8942-ecf21157ea5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tc6cb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-r88l7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m4s
	  kube-system                 kube-proxy-nqhw5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  Starting                 8m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m5s (x2 over 9m5s)  kubelet          Node multinode-563238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m5s (x2 over 9m5s)  kubelet          Node multinode-563238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m5s (x2 over 9m5s)  kubelet          Node multinode-563238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m55s                kubelet          Node multinode-563238-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node multinode-563238-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node multinode-563238-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node multinode-563238-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                kubelet          Node multinode-563238-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                 node-controller  Node multinode-563238-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062540] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.191196] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.128374] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.260026] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[May20 11:08] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.491111] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.060565] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.992394] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.103522] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.107523] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.557773] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[ +32.236508] kauditd_printk_skb: 60 callbacks suppressed
	[May20 11:09] kauditd_printk_skb: 14 callbacks suppressed
	[May20 11:14] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.138502] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.164640] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +0.136027] systemd-fstab-generator[2816]: Ignoring "noauto" option for root device
	[  +0.289671] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +3.949188] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +2.104070] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +0.080874] kauditd_printk_skb: 122 callbacks suppressed
	[  +6.132797] kauditd_printk_skb: 52 callbacks suppressed
	[May20 11:15] systemd-fstab-generator[3882]: Ignoring "noauto" option for root device
	[  +0.029310] kauditd_printk_skb: 32 callbacks suppressed
	[ +20.902005] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [59a147db91ff7efdf571ae02cc9c461619d4ba8c314eee72acde94bf781a58a2] <==
	{"level":"info","ts":"2024-05-20T11:08:08.416671Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:08:08.416769Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-05-20T11:09:27.095868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.1027ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13477461232947984917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:3b098f95aff67a14>","response":"size:40"}
	{"level":"info","ts":"2024-05-20T11:09:27.09673Z","caller":"traceutil/trace.go:171","msg":"trace[1071855207] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"197.350637ms","start":"2024-05-20T11:09:26.899337Z","end":"2024-05-20T11:09:27.096688Z","steps":["trace[1071855207] 'process raft request'  (duration: 197.258542ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:09:27.096805Z","caller":"traceutil/trace.go:171","msg":"trace[2047829939] linearizableReadLoop","detail":"{readStateIndex:505; appliedIndex:504; }","duration":"200.97191ms","start":"2024-05-20T11:09:26.89582Z","end":"2024-05-20T11:09:27.096792Z","steps":["trace[2047829939] 'read index received'  (duration: 61.428µs)","trace[2047829939] 'applied index is now lower than readState.Index'  (duration: 200.909528ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T11:09:27.096979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.121126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-563238-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-20T11:09:27.098532Z","caller":"traceutil/trace.go:171","msg":"trace[82515920] range","detail":"{range_begin:/registry/minions/multinode-563238-m02; range_end:; response_count:1; response_revision:480; }","duration":"202.725786ms","start":"2024-05-20T11:09:26.895787Z","end":"2024-05-20T11:09:27.098513Z","steps":["trace[82515920] 'agreement among raft nodes before linearized reading'  (duration: 201.058645ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.225615Z","caller":"traceutil/trace.go:171","msg":"trace[1062220281] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"157.41348ms","start":"2024-05-20T11:10:15.068184Z","end":"2024-05-20T11:10:15.225598Z","steps":["trace[1062220281] 'process raft request'  (duration: 157.363292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:10:15.225781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.177395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T11:10:15.225839Z","caller":"traceutil/trace.go:171","msg":"trace[2128692745] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:609; }","duration":"184.272541ms","start":"2024-05-20T11:10:15.041556Z","end":"2024-05-20T11:10:15.225828Z","steps":["trace[2128692745] 'agreement among raft nodes before linearized reading'  (duration: 184.155971ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.225664Z","caller":"traceutil/trace.go:171","msg":"trace[714839611] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"188.489094ms","start":"2024-05-20T11:10:15.037167Z","end":"2024-05-20T11:10:15.225656Z","steps":["trace[714839611] 'process raft request'  (duration: 187.043525ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:15.225697Z","caller":"traceutil/trace.go:171","msg":"trace[113598982] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"161.567537ms","start":"2024-05-20T11:10:15.064122Z","end":"2024-05-20T11:10:15.22569Z","steps":["trace[113598982] 'read index received'  (duration: 160.09678ms)","trace[113598982] 'applied index is now lower than readState.Index'  (duration: 1.46993ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T11:10:20.141813Z","caller":"traceutil/trace.go:171","msg":"trace[321737632] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"125.121409ms","start":"2024-05-20T11:10:20.016666Z","end":"2024-05-20T11:10:20.141788Z","steps":["trace[321737632] 'process raft request'  (duration: 124.797511ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:10:22.455112Z","caller":"traceutil/trace.go:171","msg":"trace[1157728450] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"102.133927ms","start":"2024-05-20T11:10:22.352965Z","end":"2024-05-20T11:10:22.455099Z","steps":["trace[1157728450] 'process raft request'  (duration: 100.248801ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:13:07.577671Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T11:13:07.577829Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-563238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	{"level":"warn","ts":"2024-05-20T11:13:07.577925Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:13:07.578005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:13:07.601061Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"warn","ts":"2024-05-20T11:13:07.635612Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:13:07.635686Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T11:13:07.635801Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"44b3a0f32f80bb09"}
	{"level":"info","ts":"2024-05-20T11:13:07.640802Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:13:07.640874Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:13:07.6409Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-563238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> etcd [df66636af8e0435313062c7dcd67889a0f8843ed590fdfecd182d64ddeab1524] <==
	{"level":"info","ts":"2024-05-20T11:14:47.282722Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T11:14:47.282749Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T11:14:47.292541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 switched to configuration voters=(4950477381744769801)"}
	{"level":"info","ts":"2024-05-20T11:14:47.293423Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"44b3a0f32f80bb09","added-peer-peer-urls":["https://192.168.39.145:2380"]}
	{"level":"info","ts":"2024-05-20T11:14:47.293597Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:14:47.294727Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:14:47.311054Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:14:47.319617Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"44b3a0f32f80bb09","initial-advertise-peer-urls":["https://192.168.39.145:2380"],"listen-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.145:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:14:47.325306Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:14:47.314529Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:14:47.330373Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2024-05-20T11:14:48.512159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T11:14:48.512222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:14:48.512335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 2"}
	{"level":"info","ts":"2024-05-20T11:14:48.512355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.512362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.512373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.512387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2024-05-20T11:14:48.518088Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:multinode-563238 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:14:48.51815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:14:48.518921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:14:48.519116Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:14:48.519162Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:14:48.52192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:14:48.521928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	
	
	==> kernel <==
	 11:18:31 up 10 min,  0 users,  load average: 0.20, 0.33, 0.19
	Linux multinode-563238 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [397337b81f6ac7360b0b3ad1100eb29a9fa3356d7b0e278157270ea471949011] <==
	I0520 11:17:23.164805       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:17:33.169625       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:17:33.169704       1 main.go:227] handling current node
	I0520 11:17:33.169728       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:17:33.169746       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:17:43.181140       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:17:43.181259       1 main.go:227] handling current node
	I0520 11:17:43.181348       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:17:43.181371       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:17:53.189485       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:17:53.189597       1 main.go:227] handling current node
	I0520 11:17:53.189627       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:17:53.189646       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:18:03.202720       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:18:03.202809       1 main.go:227] handling current node
	I0520 11:18:03.202833       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:18:03.202852       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:18:13.208133       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:18:13.208158       1 main.go:227] handling current node
	I0520 11:18:13.208169       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:18:13.208175       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:18:23.212686       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:18:23.212777       1 main.go:227] handling current node
	I0520 11:18:23.212804       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:18:23.212822       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e46ea94884403233271dfb03211bcd78b4e5250dce395d2dfe91f3e19c867654] <==
	I0520 11:12:19.158032       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:29.167056       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:29.167212       1 main.go:227] handling current node
	I0520 11:12:29.167252       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:29.167343       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:29.167473       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:29.167496       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:39.178379       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:39.178551       1 main.go:227] handling current node
	I0520 11:12:39.178592       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:39.178613       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:39.178718       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:39.178753       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:49.184059       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:49.184098       1 main.go:227] handling current node
	I0520 11:12:49.184107       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:49.184112       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:49.184215       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:49.184241       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	I0520 11:12:59.194893       1 main.go:223] Handling node with IPs: map[192.168.39.145:{}]
	I0520 11:12:59.194932       1 main.go:227] handling current node
	I0520 11:12:59.194942       1 main.go:223] Handling node with IPs: map[192.168.39.198:{}]
	I0520 11:12:59.194948       1 main.go:250] Node multinode-563238-m02 has CIDR [10.244.1.0/24] 
	I0520 11:12:59.195061       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0520 11:12:59.195094       1 main.go:250] Node multinode-563238-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b22c7ac35dfae92cb27f8fd0ba826b1c7ef672880a5116e94273794063caa1f0] <==
	I0520 11:14:49.857904       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 11:14:49.892841       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 11:14:49.894490       1 aggregator.go:165] initial CRD sync complete...
	I0520 11:14:49.894522       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 11:14:49.894528       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 11:14:49.937883       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 11:14:49.937973       1 policy_source.go:224] refreshing policies
	I0520 11:14:49.943622       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 11:14:50.005779       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 11:14:50.005902       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:14:50.006566       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 11:14:50.006635       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:14:50.005452       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:14:50.007036       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 11:14:50.008762       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 11:14:50.011960       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:14:50.012827       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 11:14:50.791905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:14:51.441648       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 11:14:51.550393       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 11:14:51.567705       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 11:14:51.640448       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:14:51.649486       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 11:15:03.144216       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 11:15:03.183969       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d833ed2f82deb507021352a4916e1a348b619571467d575234d71c193e9b214c] <==
	E0520 11:09:45.024585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.145:8443->192.168.39.1:48348: use of closed network connection
	E0520 11:09:45.180746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.145:8443->192.168.39.1:48364: use of closed network connection
	E0520 11:09:45.350838       1 conn.go:339] Error on socket receive: read tcp 192.168.39.145:8443->192.168.39.1:48372: use of closed network connection
	I0520 11:13:07.559547       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0520 11:13:07.589747       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.589836       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.589874       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.590255       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.595911       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0520 11:13:07.596074       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	E0520 11:13:07.596110       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596157       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596189       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596721       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596763       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596796       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596827       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596856       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596885       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596913       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0520 11:13:07.596942       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0520 11:13:07.598054       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0520 11:13:07.598510       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0520 11:13:07.599229       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0520 11:13:07.600130       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [28e64d121b441b21778c17b0ade6771b5fa5ff965678712fec30dca74e7c94c0] <==
	I0520 11:09:27.159699       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-563238-m02"
	I0520 11:09:27.167521       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m02" podCIDRs=["10.244.1.0/24"]
	I0520 11:09:36.779874       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:09:39.219378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.728613ms"
	I0520 11:09:39.238052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.369467ms"
	I0520 11:09:39.238444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.67µs"
	I0520 11:09:39.242553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.436µs"
	I0520 11:09:39.250372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.642µs"
	I0520 11:09:42.904614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.88734ms"
	I0520 11:09:42.904799       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.788µs"
	I0520 11:09:43.326008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.305064ms"
	I0520 11:09:43.326088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.878µs"
	I0520 11:10:15.228486       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m03\" does not exist"
	I0520 11:10:15.228956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:15.245744       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m03" podCIDRs=["10.244.2.0/24"]
	I0520 11:10:17.182514       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-563238-m03"
	I0520 11:10:23.988501       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:52.074193       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:53.020914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:10:53.022972       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m03\" does not exist"
	I0520 11:10:53.027543       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m03" podCIDRs=["10.244.3.0/24"]
	I0520 11:11:01.903730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:11:42.236381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m03"
	I0520 11:11:42.272161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.427285ms"
	I0520 11:11:42.272632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.58µs"
	
	
	==> kube-controller-manager [f840bd971547b956c8859cc8caf2a99d30b8a5a0cca4b22a581fc69a4a511631] <==
	I0520 11:15:27.436601       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m02\" does not exist"
	I0520 11:15:27.459890       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m02" podCIDRs=["10.244.1.0/24"]
	I0520 11:15:29.359188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.462µs"
	I0520 11:15:29.427749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.059µs"
	I0520 11:15:29.439963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.903µs"
	I0520 11:15:29.444701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.927µs"
	I0520 11:15:29.448765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.483µs"
	I0520 11:15:32.770152       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.129µs"
	I0520 11:15:36.726971       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:15:36.748926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.93µs"
	I0520 11:15:36.763557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.705µs"
	I0520 11:15:39.611619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.06113ms"
	I0520 11:15:39.611990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.901µs"
	I0520 11:15:55.180118       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:15:56.243599       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:15:56.244692       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-563238-m03\" does not exist"
	I0520 11:15:56.257884       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-563238-m03" podCIDRs=["10.244.2.0/24"]
	I0520 11:16:04.456720       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:16:09.832097       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-563238-m02"
	I0520 11:16:48.265065       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.962902ms"
	I0520 11:16:48.265444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.917µs"
	I0520 11:17:23.120924       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5zgms"
	I0520 11:17:23.146460       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5zgms"
	I0520 11:17:23.146549       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vq6fj"
	I0520 11:17:23.173019       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vq6fj"
	
	
	==> kube-proxy [39aa15086d2d3e41c85d2d745833aa96e969f5e427873f658f886879d8f98195] <==
	I0520 11:14:52.415466       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:14:52.454743       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	I0520 11:14:52.511770       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:14:52.512364       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:14:52.512470       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:14:52.517510       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:14:52.517746       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:14:52.517778       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:14:52.518890       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:14:52.520392       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:14:52.520992       1 config.go:192] "Starting service config controller"
	I0520 11:14:52.521063       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:14:52.521142       1 config.go:319] "Starting node config controller"
	I0520 11:14:52.521183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:14:52.620826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:14:52.621895       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:14:52.621911       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [3db01e7963ed108e7ea67520d4f437baef8211569a063b2df3bf33fdd538a879] <==
	I0520 11:08:28.635012       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:08:28.653163       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	I0520 11:08:28.696062       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:08:28.696157       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:08:28.696186       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:08:28.700159       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:08:28.700486       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:08:28.700518       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:08:28.701807       1 config.go:192] "Starting service config controller"
	I0520 11:08:28.701855       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:08:28.701883       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:08:28.701887       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:08:28.703686       1 config.go:319] "Starting node config controller"
	I0520 11:08:28.703719       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:08:28.802943       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:08:28.803071       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:08:28.804550       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0bdd7e83e2d02167020ebbe5af1a67b7f788b7093c8cef267a78b7bfcb189d68] <==
	I0520 11:14:47.809936       1 serving.go:380] Generated self-signed cert in-memory
	W0520 11:14:49.868808       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:14:49.868883       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:14:49.868913       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:14:49.868941       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:14:49.902994       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:14:49.905344       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:14:49.909733       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 11:14:49.909925       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:14:49.909968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:14:49.910050       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:14:50.010970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c1063cae37531247f5168d4735e86f089af7a7fbadaba97319d0f41b685edbb9] <==
	E0520 11:08:10.676462       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:08:10.675383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:08:10.676501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0520 11:08:10.675557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:08:10.676537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:08:11.557260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:08:11.557373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:08:11.619656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:08:11.619772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:08:11.753514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:08:11.753631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:08:11.821659       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:08:11.822800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 11:08:11.822588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 11:08:11.822895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 11:08:11.894544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:08:11.894589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 11:08:11.908059       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:08:11.909024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:08:11.908584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:08:11.909163       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:08:11.912474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:08:11.913823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0520 11:08:12.266335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0520 11:13:07.566145       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.045439    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.045480    3077 projected.go:200] Error preparing data for projected volume kube-api-access-njlhd for pod kube-system/coredns-7db6d8ff4d-6pmxg: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.045526    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccf30108-39a4-4d98-8f61-27d16c57ba3d-kube-api-access-njlhd podName:ccf30108-39a4-4d98-8f61-27d16c57ba3d nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.545512173 +0000 UTC m=+5.763742482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-njlhd" (UniqueName: "kubernetes.io/projected/ccf30108-39a4-4d98-8f61-27d16c57ba3d-kube-api-access-njlhd") pod "coredns-7db6d8ff4d-6pmxg" (UID: "ccf30108-39a4-4d98-8f61-27d16c57ba3d") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053673    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053715    3077 projected.go:200] Error preparing data for projected volume kube-api-access-vfxp7 for pod kube-system/kindnet-cwg9g: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053756    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc5ffb6e-96f3-49d6-97e9-efa5adf9f489-kube-api-access-vfxp7 podName:cc5ffb6e-96f3-49d6-97e9-efa5adf9f489 nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.553744003 +0000 UTC m=+5.771974312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vfxp7" (UniqueName: "kubernetes.io/projected/cc5ffb6e-96f3-49d6-97e9-efa5adf9f489-kube-api-access-vfxp7") pod "kindnet-cwg9g" (UID: "cc5ffb6e-96f3-49d6-97e9-efa5adf9f489") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053777    3077 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053784    3077 projected.go:200] Error preparing data for projected volume kube-api-access-xtpvw for pod kube-system/kube-proxy-k8lrj: failed to sync configmap cache: timed out waiting for the condition
	May 20 11:14:51 multinode-563238 kubelet[3077]: E0520 11:14:51.053804    3077 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b3c92bc-3be5-42a6-b8da-218ad474da19-kube-api-access-xtpvw podName:5b3c92bc-3be5-42a6-b8da-218ad474da19 nodeName:}" failed. No retries permitted until 2024-05-20 11:14:51.553798548 +0000 UTC m=+5.772028857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtpvw" (UniqueName: "kubernetes.io/projected/5b3c92bc-3be5-42a6-b8da-218ad474da19-kube-api-access-xtpvw") pod "kube-proxy-k8lrj" (UID: "5b3c92bc-3be5-42a6-b8da-218ad474da19") : failed to sync configmap cache: timed out waiting for the condition
	May 20 11:15:00 multinode-563238 kubelet[3077]: I0520 11:15:00.033562    3077 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 20 11:15:45 multinode-563238 kubelet[3077]: E0520 11:15:45.969096    3077 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:15:45 multinode-563238 kubelet[3077]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:15:45 multinode-563238 kubelet[3077]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:15:45 multinode-563238 kubelet[3077]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:15:45 multinode-563238 kubelet[3077]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:16:45 multinode-563238 kubelet[3077]: E0520 11:16:45.962813    3077 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:16:45 multinode-563238 kubelet[3077]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:16:45 multinode-563238 kubelet[3077]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:16:45 multinode-563238 kubelet[3077]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:16:45 multinode-563238 kubelet[3077]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:17:45 multinode-563238 kubelet[3077]: E0520 11:17:45.965385    3077 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:17:45 multinode-563238 kubelet[3077]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:17:45 multinode-563238 kubelet[3077]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:17:45 multinode-563238 kubelet[3077]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:17:45 multinode-563238 kubelet[3077]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:18:30.483438   42703 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18925-3819/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-563238 -n multinode-563238
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-563238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.35s)

                                                
                                    
x
+
TestPreload (350.42s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-086998 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0520 11:22:19.191691   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 11:24:37.358461   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 11:24:54.309801   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-086998 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m27.680296354s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-086998 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-086998 image pull gcr.io/k8s-minikube/busybox: (2.810466695s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-086998
E0520 11:27:19.192679   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-086998: exit status 82 (2m0.460002715s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-086998"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-086998 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-05-20 11:27:47.0632017 +0000 UTC m=+4036.775084539
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-086998 -n test-preload-086998
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-086998 -n test-preload-086998: exit status 3 (18.585031781s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:28:05.645280   45931 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0520 11:28:05.645303   45931 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-086998" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-086998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-086998
--- FAIL: TestPreload (350.42s)

                                                
                                    
x
+
TestKubernetesUpgrade (417.53s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.491010894s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854547] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-854547" primary control-plane node in "kubernetes-upgrade-854547" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:30:00.208855   47034 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:30:00.208989   47034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:30:00.209000   47034 out.go:304] Setting ErrFile to fd 2...
	I0520 11:30:00.209005   47034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:30:00.209287   47034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:30:00.209909   47034 out.go:298] Setting JSON to false
	I0520 11:30:00.210773   47034 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4343,"bootTime":1716200257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:30:00.210825   47034 start.go:139] virtualization: kvm guest
	I0520 11:30:00.213067   47034 out.go:177] * [kubernetes-upgrade-854547] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:30:00.214578   47034 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:30:00.215917   47034 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:30:00.214567   47034 notify.go:220] Checking for updates...
	I0520 11:30:00.218745   47034 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:30:00.221153   47034 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:30:00.222285   47034 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:30:00.223667   47034 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:30:00.225297   47034 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:30:00.258882   47034 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:30:00.259981   47034 start.go:297] selected driver: kvm2
	I0520 11:30:00.260001   47034 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:30:00.260016   47034 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:30:00.260754   47034 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:30:00.260832   47034 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:30:00.274873   47034 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:30:00.274934   47034 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:30:00.275155   47034 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 11:30:00.275181   47034 cni.go:84] Creating CNI manager for ""
	I0520 11:30:00.275192   47034 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:30:00.275203   47034 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:30:00.275294   47034 start.go:340] cluster config:
	{Name:kubernetes-upgrade-854547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:30:00.275406   47034 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:30:00.276859   47034 out.go:177] * Starting "kubernetes-upgrade-854547" primary control-plane node in "kubernetes-upgrade-854547" cluster
	I0520 11:30:00.277899   47034 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:30:00.277931   47034 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 11:30:00.277947   47034 cache.go:56] Caching tarball of preloaded images
	I0520 11:30:00.278027   47034 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:30:00.278040   47034 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 11:30:00.278426   47034 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/config.json ...
	I0520 11:30:00.278451   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/config.json: {Name:mk28a2961b0013f3d16044aae8b67ddabf4db2f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:00.278596   47034 start.go:360] acquireMachinesLock for kubernetes-upgrade-854547: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:30:00.278637   47034 start.go:364] duration metric: took 20.686µs to acquireMachinesLock for "kubernetes-upgrade-854547"
	I0520 11:30:00.278655   47034 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-854547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:30:00.278737   47034 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 11:30:00.280150   47034 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 11:30:00.280290   47034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:30:00.280327   47034 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:30:00.294389   47034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39381
	I0520 11:30:00.294746   47034 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:30:00.295205   47034 main.go:141] libmachine: Using API Version  1
	I0520 11:30:00.295226   47034 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:30:00.295528   47034 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:30:00.295679   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetMachineName
	I0520 11:30:00.295814   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:00.295927   47034 start.go:159] libmachine.API.Create for "kubernetes-upgrade-854547" (driver="kvm2")
	I0520 11:30:00.295953   47034 client.go:168] LocalClient.Create starting
	I0520 11:30:00.295979   47034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 11:30:00.296013   47034 main.go:141] libmachine: Decoding PEM data...
	I0520 11:30:00.296032   47034 main.go:141] libmachine: Parsing certificate...
	I0520 11:30:00.296189   47034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 11:30:00.296223   47034 main.go:141] libmachine: Decoding PEM data...
	I0520 11:30:00.296243   47034 main.go:141] libmachine: Parsing certificate...
	I0520 11:30:00.296290   47034 main.go:141] libmachine: Running pre-create checks...
	I0520 11:30:00.296302   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .PreCreateCheck
	I0520 11:30:00.296673   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetConfigRaw
	I0520 11:30:00.297091   47034 main.go:141] libmachine: Creating machine...
	I0520 11:30:00.297109   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .Create
	I0520 11:30:00.297246   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Creating KVM machine...
	I0520 11:30:00.298344   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found existing default KVM network
	I0520 11:30:00.299062   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:00.298935   47073 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
	I0520 11:30:00.299106   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | created network xml: 
	I0520 11:30:00.299129   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | <network>
	I0520 11:30:00.299141   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |   <name>mk-kubernetes-upgrade-854547</name>
	I0520 11:30:00.299157   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |   <dns enable='no'/>
	I0520 11:30:00.299166   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |   
	I0520 11:30:00.299172   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 11:30:00.299181   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |     <dhcp>
	I0520 11:30:00.299187   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 11:30:00.299195   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |     </dhcp>
	I0520 11:30:00.299200   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |   </ip>
	I0520 11:30:00.299208   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG |   
	I0520 11:30:00.299228   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | </network>
	I0520 11:30:00.299243   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | 
	I0520 11:30:00.303825   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | trying to create private KVM network mk-kubernetes-upgrade-854547 192.168.39.0/24...
	I0520 11:30:00.368322   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | private KVM network mk-kubernetes-upgrade-854547 192.168.39.0/24 created
	I0520 11:30:00.368349   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547 ...
	I0520 11:30:00.368377   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:00.368297   47073 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:30:00.368397   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:30:00.368452   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 11:30:00.592096   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:00.591972   47073 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa...
	I0520 11:30:00.858778   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:00.858620   47073 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/kubernetes-upgrade-854547.rawdisk...
	I0520 11:30:00.858815   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Writing magic tar header
	I0520 11:30:00.858842   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Writing SSH key tar header
	I0520 11:30:00.858859   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:00.858775   47073 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547 ...
	I0520 11:30:00.858879   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547
	I0520 11:30:00.858945   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547 (perms=drwx------)
	I0520 11:30:00.858963   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 11:30:00.858971   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 11:30:00.858977   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:30:00.858987   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 11:30:00.859007   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 11:30:00.859017   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 11:30:00.859025   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 11:30:00.859035   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Creating domain...
	I0520 11:30:00.859055   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 11:30:00.859072   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 11:30:00.859089   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home/jenkins
	I0520 11:30:00.859098   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Checking permissions on dir: /home
	I0520 11:30:00.859111   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Skipping /home - not owner
	I0520 11:30:00.860245   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) define libvirt domain using xml: 
	I0520 11:30:00.860259   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) <domain type='kvm'>
	I0520 11:30:00.860267   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <name>kubernetes-upgrade-854547</name>
	I0520 11:30:00.860282   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <memory unit='MiB'>2200</memory>
	I0520 11:30:00.860291   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <vcpu>2</vcpu>
	I0520 11:30:00.860303   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <features>
	I0520 11:30:00.860311   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <acpi/>
	I0520 11:30:00.860318   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <apic/>
	I0520 11:30:00.860324   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <pae/>
	I0520 11:30:00.860340   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     
	I0520 11:30:00.860348   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   </features>
	I0520 11:30:00.860354   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <cpu mode='host-passthrough'>
	I0520 11:30:00.860361   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   
	I0520 11:30:00.860366   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   </cpu>
	I0520 11:30:00.860377   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <os>
	I0520 11:30:00.860388   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <type>hvm</type>
	I0520 11:30:00.860397   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <boot dev='cdrom'/>
	I0520 11:30:00.860408   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <boot dev='hd'/>
	I0520 11:30:00.860416   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <bootmenu enable='no'/>
	I0520 11:30:00.860423   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   </os>
	I0520 11:30:00.860448   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   <devices>
	I0520 11:30:00.860468   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <disk type='file' device='cdrom'>
	I0520 11:30:00.860487   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/boot2docker.iso'/>
	I0520 11:30:00.860498   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <target dev='hdc' bus='scsi'/>
	I0520 11:30:00.860505   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <readonly/>
	I0520 11:30:00.860514   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </disk>
	I0520 11:30:00.860531   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <disk type='file' device='disk'>
	I0520 11:30:00.860545   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 11:30:00.860596   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/kubernetes-upgrade-854547.rawdisk'/>
	I0520 11:30:00.860630   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <target dev='hda' bus='virtio'/>
	I0520 11:30:00.860645   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </disk>
	I0520 11:30:00.860657   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <interface type='network'>
	I0520 11:30:00.860669   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <source network='mk-kubernetes-upgrade-854547'/>
	I0520 11:30:00.860682   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <model type='virtio'/>
	I0520 11:30:00.860693   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </interface>
	I0520 11:30:00.860702   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <interface type='network'>
	I0520 11:30:00.860707   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <source network='default'/>
	I0520 11:30:00.860714   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <model type='virtio'/>
	I0520 11:30:00.860720   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </interface>
	I0520 11:30:00.860727   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <serial type='pty'>
	I0520 11:30:00.860733   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <target port='0'/>
	I0520 11:30:00.860740   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </serial>
	I0520 11:30:00.860745   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <console type='pty'>
	I0520 11:30:00.860752   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <target type='serial' port='0'/>
	I0520 11:30:00.860767   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </console>
	I0520 11:30:00.860785   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     <rng model='virtio'>
	I0520 11:30:00.860795   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)       <backend model='random'>/dev/random</backend>
	I0520 11:30:00.860802   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     </rng>
	I0520 11:30:00.860817   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     
	I0520 11:30:00.860825   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)     
	I0520 11:30:00.860831   47034 main.go:141] libmachine: (kubernetes-upgrade-854547)   </devices>
	I0520 11:30:00.860835   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) </domain>
	I0520 11:30:00.860842   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) 
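	[editor's note] The block above is the libvirt domain XML the kvm2 driver renders before defining the guest. A minimal, illustrative sketch of the define-then-start flow follows; it assumes the libvirt.org/go/libvirt bindings and a domain.xml file holding the printed document, and is not the driver's actual code.

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumed import path; the real driver may use different bindings
	)

	func main() {
		// Connect to the same URI the profile uses (KVMQemuURI:qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// domain.xml would hold the <domain type='kvm'> document printed in the log.
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			log.Fatalf("read xml: %v", err)
		}

		// Define the persistent domain, then start it ("Creating domain..." in the log).
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatalf("define: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start: %v", err)
		}
		log.Println("domain defined and started")
	}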
	I0520 11:30:00.864914   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:b4:4c:a7 in network default
	I0520 11:30:00.865452   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Ensuring networks are active...
	I0520 11:30:00.865476   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:00.866132   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Ensuring network default is active
	I0520 11:30:00.866383   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Ensuring network mk-kubernetes-upgrade-854547 is active
	I0520 11:30:00.866838   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Getting domain xml...
	I0520 11:30:00.867485   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Creating domain...
	I0520 11:30:02.146205   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Waiting to get IP...
	I0520 11:30:02.147307   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:02.147718   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:02.147751   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:02.147672   47073 retry.go:31] will retry after 241.18276ms: waiting for machine to come up
	I0520 11:30:02.390066   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:02.390592   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:02.390616   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:02.390536   47073 retry.go:31] will retry after 326.45479ms: waiting for machine to come up
	I0520 11:30:02.719372   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:02.719896   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:02.719935   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:02.719833   47073 retry.go:31] will retry after 316.931246ms: waiting for machine to come up
	I0520 11:30:03.038511   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:03.038939   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:03.038962   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:03.038896   47073 retry.go:31] will retry after 491.242292ms: waiting for machine to come up
	I0520 11:30:03.531438   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:03.531845   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:03.531898   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:03.531841   47073 retry.go:31] will retry after 487.96509ms: waiting for machine to come up
	I0520 11:30:04.021380   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:04.021865   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:04.021887   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:04.021833   47073 retry.go:31] will retry after 879.621698ms: waiting for machine to come up
	I0520 11:30:04.903039   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:04.903487   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:04.903522   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:04.903435   47073 retry.go:31] will retry after 820.571146ms: waiting for machine to come up
	I0520 11:30:05.725908   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:05.726329   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:05.726357   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:05.726281   47073 retry.go:31] will retry after 1.331147203s: waiting for machine to come up
	I0520 11:30:07.059651   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:07.060147   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:07.060189   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:07.060093   47073 retry.go:31] will retry after 1.170789934s: waiting for machine to come up
	I0520 11:30:08.232373   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:08.232770   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:08.232794   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:08.232708   47073 retry.go:31] will retry after 2.080161424s: waiting for machine to come up
	I0520 11:30:10.314521   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:10.314900   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:10.314927   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:10.314876   47073 retry.go:31] will retry after 2.90416652s: waiting for machine to come up
	I0520 11:30:13.222371   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:13.222885   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:13.222918   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:13.222820   47073 retry.go:31] will retry after 2.325418771s: waiting for machine to come up
	I0520 11:30:15.549443   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:15.549794   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:15.549824   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:15.549725   47073 retry.go:31] will retry after 3.269166689s: waiting for machine to come up
	I0520 11:30:18.823073   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:18.823400   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find current IP address of domain kubernetes-upgrade-854547 in network mk-kubernetes-upgrade-854547
	I0520 11:30:18.823427   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | I0520 11:30:18.823353   47073 retry.go:31] will retry after 5.457652071s: waiting for machine to come up
	I0520 11:30:24.282166   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.282584   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Found IP for machine: 192.168.39.87
	I0520 11:30:24.282605   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Reserving static IP address...
	I0520 11:30:24.282620   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has current primary IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.282958   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-854547", mac: "52:54:00:60:ec:cf", ip: "192.168.39.87"} in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.354920   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Reserved static IP address: 192.168.39.87
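	[editor's note] The "Waiting to get IP" phase above is a poll with randomized, growing backoff ("will retry after ...: waiting for machine to come up"). A self-contained sketch of that pattern; the delays, jitter, and lookupIP placeholder are illustrative, not minikube's retry.go.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the DHCP leases of the libvirt network;
	// it is a placeholder, not the driver's real lookup.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		backoff := 200 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Add jitter and grow the delay, mirroring the "will retry after ..." lines.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2
		}
		fmt.Println("gave up waiting for machine IP")
	}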
	I0520 11:30:24.354947   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Getting to WaitForSSH function...
	I0520 11:30:24.354957   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Waiting for SSH to be available...
	I0520 11:30:24.357401   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.357942   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.357973   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.358511   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Using SSH client type: external
	I0520 11:30:24.358542   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa (-rw-------)
	I0520 11:30:24.358572   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:30:24.358587   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | About to run SSH command:
	I0520 11:30:24.358608   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | exit 0
	I0520 11:30:24.480759   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | SSH cmd err, output: <nil>: 
	I0520 11:30:24.481039   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) KVM machine creation complete!
	I0520 11:30:24.481311   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetConfigRaw
	I0520 11:30:24.481827   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:24.482025   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:24.482145   47034 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:30:24.482160   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetState
	I0520 11:30:24.483541   47034 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:30:24.483557   47034 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:30:24.483565   47034 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:30:24.483587   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:24.486215   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.486620   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.486647   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.486736   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:24.486909   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.487057   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.487191   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:24.487370   47034 main.go:141] libmachine: Using SSH client type: native
	I0520 11:30:24.487582   47034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0520 11:30:24.487606   47034 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:30:24.584138   47034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:30:24.584164   47034 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:30:24.584175   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:24.586940   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.587278   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.587308   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.587449   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:24.587621   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.587747   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.587879   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:24.588018   47034 main.go:141] libmachine: Using SSH client type: native
	I0520 11:30:24.588312   47034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0520 11:30:24.588337   47034 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:30:24.686703   47034 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:30:24.686794   47034 main.go:141] libmachine: found compatible host: buildroot
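	[editor's note] The provisioner is detected by running `cat /etc/os-release` over SSH and matching its key=value fields (Buildroot here). A small sketch of parsing that format, using the exact output captured above; the matching rule is simplified for illustration.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// The os-release output captured in the log above.
		osRelease := `NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"`

		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "=")
			if !ok {
				continue
			}
			fields[k] = strings.Trim(v, `"`)
		}
		if fields["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}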
	I0520 11:30:24.686807   47034 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:30:24.686818   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetMachineName
	I0520 11:30:24.687075   47034 buildroot.go:166] provisioning hostname "kubernetes-upgrade-854547"
	I0520 11:30:24.687109   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetMachineName
	I0520 11:30:24.687284   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:24.690021   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.690406   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.690448   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.690546   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:24.690719   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.691110   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.691262   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:24.691437   47034 main.go:141] libmachine: Using SSH client type: native
	I0520 11:30:24.691608   47034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0520 11:30:24.691620   47034 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-854547 && echo "kubernetes-upgrade-854547" | sudo tee /etc/hostname
	I0520 11:30:24.814389   47034 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854547
	
	I0520 11:30:24.814415   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:24.817484   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.817833   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.817868   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.818007   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:24.818197   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.818383   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:24.818498   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:24.818666   47034 main.go:141] libmachine: Using SSH client type: native
	I0520 11:30:24.818820   47034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0520 11:30:24.818837   47034 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-854547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-854547/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-854547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:30:24.933827   47034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:30:24.933860   47034 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:30:24.933893   47034 buildroot.go:174] setting up certificates
	I0520 11:30:24.933905   47034 provision.go:84] configureAuth start
	I0520 11:30:24.933917   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetMachineName
	I0520 11:30:24.934309   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetIP
	I0520 11:30:24.937148   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.937482   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.937511   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.937634   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:24.939971   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.940295   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:24.940321   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:24.940436   47034 provision.go:143] copyHostCerts
	I0520 11:30:24.940496   47034 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:30:24.940514   47034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:30:24.940595   47034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:30:24.940717   47034 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:30:24.940730   47034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:30:24.940793   47034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:30:24.940897   47034 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:30:24.940908   47034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:30:24.940960   47034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:30:24.941042   47034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-854547 san=[127.0.0.1 192.168.39.87 kubernetes-upgrade-854547 localhost minikube]
	I0520 11:30:25.079412   47034 provision.go:177] copyRemoteCerts
	I0520 11:30:25.079462   47034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:30:25.079484   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:25.082320   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.082626   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.082670   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.082802   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:25.082997   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.083166   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:25.083305   47034 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa Username:docker}
	I0520 11:30:25.163070   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:30:25.187592   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0520 11:30:25.211193   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:30:25.236007   47034 provision.go:87] duration metric: took 302.090422ms to configureAuth
	I0520 11:30:25.236048   47034 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:30:25.236254   47034 config.go:182] Loaded profile config "kubernetes-upgrade-854547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:30:25.236349   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:25.238965   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.239337   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.239377   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.239544   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:25.239744   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.240016   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.240179   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:25.240380   47034 main.go:141] libmachine: Using SSH client type: native
	I0520 11:30:25.240593   47034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0520 11:30:25.240614   47034 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:30:25.504516   47034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:30:25.504544   47034 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:30:25.504556   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetURL
	I0520 11:30:25.505866   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Using libvirt version 6000000
	I0520 11:30:25.508594   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.509080   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.509120   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.509323   47034 main.go:141] libmachine: Docker is up and running!
	I0520 11:30:25.509338   47034 main.go:141] libmachine: Reticulating splines...
	I0520 11:30:25.509344   47034 client.go:171] duration metric: took 25.213385096s to LocalClient.Create
	I0520 11:30:25.509362   47034 start.go:167] duration metric: took 25.213436514s to libmachine.API.Create "kubernetes-upgrade-854547"
	I0520 11:30:25.509369   47034 start.go:293] postStartSetup for "kubernetes-upgrade-854547" (driver="kvm2")
	I0520 11:30:25.509378   47034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:30:25.509392   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:25.509650   47034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:30:25.509674   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:25.512156   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.512531   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.512564   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.512686   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:25.512869   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.513014   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:25.513177   47034 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa Username:docker}
	I0520 11:30:25.591769   47034 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:30:25.596135   47034 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:30:25.596157   47034 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:30:25.596216   47034 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:30:25.596318   47034 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:30:25.596430   47034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:30:25.605424   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:30:25.629155   47034 start.go:296] duration metric: took 119.774737ms for postStartSetup
	I0520 11:30:25.629203   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetConfigRaw
	I0520 11:30:25.629729   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetIP
	I0520 11:30:25.632396   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.632765   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.632793   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.633087   47034 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/config.json ...
	I0520 11:30:25.633253   47034 start.go:128] duration metric: took 25.354507033s to createHost
	I0520 11:30:25.633273   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:25.635261   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.635580   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.635611   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.635727   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:25.635908   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.636068   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.636198   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:25.636327   47034 main.go:141] libmachine: Using SSH client type: native
	I0520 11:30:25.636494   47034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0520 11:30:25.636507   47034 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:30:25.733592   47034 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716204625.711576219
	
	I0520 11:30:25.733613   47034 fix.go:216] guest clock: 1716204625.711576219
	I0520 11:30:25.733619   47034 fix.go:229] Guest: 2024-05-20 11:30:25.711576219 +0000 UTC Remote: 2024-05-20 11:30:25.633263216 +0000 UTC m=+25.458270594 (delta=78.313003ms)
	I0520 11:30:25.733637   47034 fix.go:200] guest clock delta is within tolerance: 78.313003ms
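	[editor's note] The guest clock is read with `date +%s.%N` and compared against the host clock; the delta (78ms here) is accepted when it stays within a tolerance. A sketch of that comparison, using the timestamp captured above; the tolerance constant is illustrative, not minikube's threshold.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, as captured in the log.
		guestOut := "1716204625.711576219"

		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}

		// Tolerance chosen for illustration only.
		const tolerance = 2 * time.Second
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}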
	I0520 11:30:25.733641   47034 start.go:83] releasing machines lock for "kubernetes-upgrade-854547", held for 25.454995008s
	I0520 11:30:25.733665   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:25.733968   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetIP
	I0520 11:30:25.736667   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.737104   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.737136   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.737281   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:25.737740   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:25.737933   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:30:25.738028   47034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:30:25.738083   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:25.738137   47034 ssh_runner.go:195] Run: cat /version.json
	I0520 11:30:25.738159   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:30:25.740842   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.741062   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.741183   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.741211   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.741311   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:25.741337   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:25.741338   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:25.741503   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.741554   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:30:25.741747   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:30:25.741768   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:25.741920   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:30:25.741911   47034 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa Username:docker}
	I0520 11:30:25.742067   47034 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa Username:docker}
	W0520 11:30:25.818910   47034 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:30:25.818993   47034 ssh_runner.go:195] Run: systemctl --version
	I0520 11:30:25.846373   47034 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:30:26.009583   47034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:30:26.017056   47034 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:30:26.017141   47034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:30:26.034446   47034 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:30:26.034470   47034 start.go:494] detecting cgroup driver to use...
	I0520 11:30:26.034554   47034 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:30:26.050427   47034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:30:26.064320   47034 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:30:26.064377   47034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:30:26.079017   47034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:30:26.093266   47034 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:30:26.210830   47034 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:30:26.384792   47034 docker.go:233] disabling docker service ...
	I0520 11:30:26.384869   47034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:30:26.401362   47034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:30:26.414924   47034 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:30:26.536379   47034 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:30:26.656515   47034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:30:26.671480   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:30:26.689935   47034 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:30:26.689988   47034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:30:26.701199   47034 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:30:26.701259   47034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:30:26.711894   47034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:30:26.722484   47034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:30:26.732904   47034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
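	[editor's note] The cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed to pin the pause image and switch the cgroup manager, after which crio is restarted. A hedged sketch driving those same commands with os/exec (values copied from the log; in the real flow they run on the guest over SSH, not locally).

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := [][]string{
			// Pin the pause image used by cri-o (value from the log).
			{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, conf},
			// Use cgroupfs as the cgroup manager and reset conmon_cgroup to "pod".
			{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
			{"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
			{"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v (%s)\n", c, err, out)
				return
			}
		}
		fmt.Println("cri-o configured; a `systemctl restart crio` would follow")
	}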
	I0520 11:30:26.743670   47034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:30:26.753114   47034 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:30:26.753154   47034 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:30:26.765726   47034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:30:26.775169   47034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:30:26.887648   47034 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:30:27.039814   47034 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:30:27.039884   47034 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:30:27.046674   47034 start.go:562] Will wait 60s for crictl version
	I0520 11:30:27.046725   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:27.051518   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:30:27.096246   47034 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:30:27.096334   47034 ssh_runner.go:195] Run: crio --version
	I0520 11:30:27.124879   47034 ssh_runner.go:195] Run: crio --version
	I0520 11:30:27.160276   47034 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:30:27.161700   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetIP
	I0520 11:30:27.164731   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:27.165140   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:30:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:30:27.165169   47034 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:30:27.165402   47034 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:30:27.170278   47034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:30:27.184415   47034 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-854547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:30:27.184543   47034 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:30:27.184599   47034 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:30:27.223116   47034 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:30:27.223178   47034 ssh_runner.go:195] Run: which lz4
	I0520 11:30:27.227417   47034 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:30:27.231937   47034 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:30:27.231969   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:30:28.987646   47034 crio.go:462] duration metric: took 1.760264267s to copy over tarball
	I0520 11:30:28.987722   47034 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:30:31.532876   47034 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.545118269s)
	I0520 11:30:31.532909   47034 crio.go:469] duration metric: took 2.545238004s to extract the tarball
	I0520 11:30:31.532918   47034 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:30:31.574925   47034 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:30:31.623814   47034 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:30:31.623838   47034 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:30:31.623906   47034 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:30:31.623931   47034 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:30:31.623943   47034 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:30:31.623966   47034 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:30:31.623970   47034 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:30:31.623906   47034 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:30:31.624023   47034 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:30:31.624030   47034 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:30:31.625626   47034 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:30:31.625646   47034 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:30:31.625681   47034 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:30:31.625627   47034 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:30:31.625732   47034 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:30:31.625627   47034 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:30:31.625627   47034 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:30:31.625636   47034 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:30:31.789283   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:30:31.802791   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:30:31.838175   47034 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:30:31.838220   47034 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:30:31.838269   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:31.849188   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:30:31.849331   47034 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:30:31.849369   47034 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:30:31.849431   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:31.851423   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:30:31.871952   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:30:31.891502   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:30:31.891579   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:30:31.908234   47034 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:30:31.908276   47034 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:30:31.908319   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:31.949656   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:30:31.949689   47034 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:30:31.949717   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:30:31.949733   47034 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:30:31.949775   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:31.983277   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:30:31.983307   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:30:32.017433   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:30:32.022759   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:30:32.023293   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:30:32.042666   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:30:32.081828   47034 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:30:32.081873   47034 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:30:32.081898   47034 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:30:32.081918   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:32.081927   47034 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:30:32.081962   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:32.105453   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:30:32.105470   47034 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:30:32.105480   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:30:32.105498   47034 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:30:32.105536   47034 ssh_runner.go:195] Run: which crictl
	I0520 11:30:32.153787   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:30:32.153880   47034 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:30:32.153910   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:30:32.191276   47034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:30:32.513114   47034 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:30:32.654140   47034 cache_images.go:92] duration metric: took 1.030283922s to LoadCachedImages
	W0520 11:30:32.654262   47034 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0520 11:30:32.654285   47034 kubeadm.go:928] updating node { 192.168.39.87 8443 v1.20.0 crio true true} ...
	I0520 11:30:32.654402   47034 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-854547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:30:32.654490   47034 ssh_runner.go:195] Run: crio config
	I0520 11:30:32.709062   47034 cni.go:84] Creating CNI manager for ""
	I0520 11:30:32.709083   47034 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:30:32.709104   47034 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:30:32.709129   47034 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-854547 NodeName:kubernetes-upgrade-854547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:30:32.709281   47034 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-854547"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:30:32.709355   47034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:30:32.720219   47034 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:30:32.720293   47034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:30:32.729962   47034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0520 11:30:32.746856   47034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:30:32.763866   47034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:30:32.781246   47034 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0520 11:30:32.785168   47034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:30:32.799760   47034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:30:32.921207   47034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:30:32.939289   47034 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547 for IP: 192.168.39.87
	I0520 11:30:32.939312   47034 certs.go:194] generating shared ca certs ...
	I0520 11:30:32.939333   47034 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:32.939486   47034 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:30:32.939528   47034 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:30:32.939537   47034 certs.go:256] generating profile certs ...
	I0520 11:30:32.939585   47034 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.key
	I0520 11:30:32.939597   47034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.crt with IP's: []
	I0520 11:30:33.087340   47034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.crt ...
	I0520 11:30:33.087370   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.crt: {Name:mk9a437c870d49a46c1ef1a533fb80965476d1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:33.087556   47034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.key ...
	I0520 11:30:33.087585   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.key: {Name:mkb0245c9d2ce8dc7dcd65511a8ef724d3820993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:33.087710   47034 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.key.14694498
	I0520 11:30:33.087729   47034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.crt.14694498 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.87]
	I0520 11:30:33.496389   47034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.crt.14694498 ...
	I0520 11:30:33.496421   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.crt.14694498: {Name:mk0801d0d3de5fbd41252779fdbcffcf589438d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:33.496564   47034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.key.14694498 ...
	I0520 11:30:33.496581   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.key.14694498: {Name:mk52b4ae0dc428c4e92b5f3600af8059684e43c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:33.496646   47034 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.crt.14694498 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.crt
	I0520 11:30:33.496715   47034 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.key.14694498 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.key
	I0520 11:30:33.496766   47034 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.key
	I0520 11:30:33.496780   47034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.crt with IP's: []
	I0520 11:30:33.644305   47034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.crt ...
	I0520 11:30:33.644334   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.crt: {Name:mk33f27a43d70a0fc975da9f072ab4eaef6e48fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:33.644479   47034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.key ...
	I0520 11:30:33.644491   47034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.key: {Name:mkdcfd17d0a05c4307be8540f23875ebf2639940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:30:33.644652   47034 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:30:33.644686   47034 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:30:33.644695   47034 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:30:33.644718   47034 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:30:33.644740   47034 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:30:33.644762   47034 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:30:33.644797   47034 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:30:33.645413   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:30:33.672694   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:30:33.697195   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:30:33.726869   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:30:33.754210   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0520 11:30:33.801282   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:30:33.829578   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:30:33.853273   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:30:33.881226   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:30:33.904659   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:30:33.928021   47034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:30:33.951639   47034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:30:33.970493   47034 ssh_runner.go:195] Run: openssl version
	I0520 11:30:33.976630   47034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:30:33.989164   47034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:30:33.994181   47034 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:30:33.994235   47034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:30:34.000255   47034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:30:34.011594   47034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:30:34.024748   47034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:30:34.030273   47034 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:30:34.030331   47034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:30:34.036351   47034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:30:34.049153   47034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:30:34.061989   47034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:30:34.066787   47034 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:30:34.066837   47034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:30:34.073019   47034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:30:34.084638   47034 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:30:34.088976   47034 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:30:34.089039   47034 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-854547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:30:34.089139   47034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:30:34.089184   47034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:30:34.131927   47034 cri.go:89] found id: ""
	I0520 11:30:34.132015   47034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:30:34.142831   47034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:30:34.153296   47034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:30:34.163586   47034 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:30:34.163609   47034 kubeadm.go:156] found existing configuration files:
	
	I0520 11:30:34.163653   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:30:34.173460   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:30:34.173527   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:30:34.183924   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:30:34.193628   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:30:34.193708   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:30:34.203810   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:30:34.213822   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:30:34.213887   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:30:34.223819   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:30:34.233504   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:30:34.233570   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:30:34.243585   47034 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:30:34.356301   47034 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:30:34.356354   47034 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:30:34.516468   47034 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:30:34.516635   47034 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:30:34.516769   47034 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:30:34.748499   47034 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:30:34.786800   47034 out.go:204]   - Generating certificates and keys ...
	I0520 11:30:34.786935   47034 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:30:34.787038   47034 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:30:35.003782   47034 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 11:30:35.068595   47034 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 11:30:35.216329   47034 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 11:30:35.290469   47034 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 11:30:35.646203   47034 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 11:30:35.646435   47034 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-854547 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	I0520 11:30:35.772674   47034 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 11:30:35.773076   47034 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-854547 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	I0520 11:30:35.914611   47034 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 11:30:36.007962   47034 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 11:30:36.134618   47034 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 11:30:36.134790   47034 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:30:36.294193   47034 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:30:36.477865   47034 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:30:36.606710   47034 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:30:36.709442   47034 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:30:36.726717   47034 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:30:36.727966   47034 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:30:36.728057   47034 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:30:36.856585   47034 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:30:36.858609   47034 out.go:204]   - Booting up control plane ...
	I0520 11:30:36.858720   47034 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:30:36.863753   47034 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:30:36.864735   47034 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:30:36.865677   47034 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:30:36.870187   47034 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:31:16.863194   47034 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:31:16.863623   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:31:16.863922   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:31:21.864394   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:31:21.864681   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:31:31.863993   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:31:31.864252   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:31:51.863784   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:31:51.863966   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:32:31.865108   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:32:31.865369   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:32:31.865386   47034 kubeadm.go:309] 
	I0520 11:32:31.865475   47034 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:32:31.865545   47034 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:32:31.865553   47034 kubeadm.go:309] 
	I0520 11:32:31.865600   47034 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:32:31.865654   47034 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:32:31.865785   47034 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:32:31.865798   47034 kubeadm.go:309] 
	I0520 11:32:31.865928   47034 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:32:31.865972   47034 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:32:31.866011   47034 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:32:31.866020   47034 kubeadm.go:309] 
	I0520 11:32:31.866150   47034 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:32:31.866252   47034 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:32:31.866264   47034 kubeadm.go:309] 
	I0520 11:32:31.866385   47034 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:32:31.866536   47034 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:32:31.866641   47034 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:32:31.866757   47034 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:32:31.866785   47034 kubeadm.go:309] 
	I0520 11:32:31.868017   47034 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:32:31.868151   47034 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:32:31.868236   47034 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:32:31.868410   47034 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-854547 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-854547 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0520 11:32:31.868468   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:32:34.087189   47034 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.218684983s)
	I0520 11:32:34.087268   47034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:32:34.107436   47034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:32:34.123367   47034 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:32:34.123392   47034 kubeadm.go:156] found existing configuration files:
	
	I0520 11:32:34.123447   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:32:34.136583   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:32:34.136655   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:32:34.151324   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:32:34.165748   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:32:34.165811   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:32:34.181771   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:32:34.198090   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:32:34.198160   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:32:34.214322   47034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:32:34.226937   47034 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:32:34.227004   47034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:32:34.241725   47034 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:32:34.376457   47034 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:32:34.376678   47034 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:32:34.593066   47034 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:32:34.593203   47034 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:32:34.593324   47034 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:32:34.844558   47034 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:32:34.846523   47034 out.go:204]   - Generating certificates and keys ...
	I0520 11:32:34.846627   47034 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:32:34.846705   47034 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:32:34.846802   47034 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:32:34.846880   47034 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:32:34.847126   47034 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:32:34.847226   47034 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:32:34.847650   47034 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:32:34.848291   47034 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:32:34.848506   47034 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:32:34.849154   47034 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:32:34.849224   47034 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:32:34.849292   47034 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:32:35.021748   47034 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:32:35.115457   47034 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:32:35.399995   47034 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:32:35.788343   47034 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:32:35.807648   47034 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:32:35.813406   47034 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:32:35.813509   47034 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:32:35.996092   47034 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:32:35.997320   47034 out.go:204]   - Booting up control plane ...
	I0520 11:32:35.997452   47034 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:32:36.018487   47034 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:32:36.021790   47034 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:32:36.023675   47034 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:32:36.029656   47034 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:33:16.034596   47034 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:33:16.034712   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:33:16.034901   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:33:21.033287   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:33:21.033482   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:33:31.033848   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:33:31.034128   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:33:51.033578   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:33:51.033832   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:34:31.033515   47034 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:34:31.033778   47034 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:34:31.033794   47034 kubeadm.go:309] 
	I0520 11:34:31.033845   47034 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:34:31.033901   47034 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:34:31.033919   47034 kubeadm.go:309] 
	I0520 11:34:31.033971   47034 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:34:31.034055   47034 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:34:31.034208   47034 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:34:31.034224   47034 kubeadm.go:309] 
	I0520 11:34:31.034402   47034 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:34:31.034444   47034 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:34:31.034495   47034 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:34:31.034502   47034 kubeadm.go:309] 
	I0520 11:34:31.034619   47034 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:34:31.034728   47034 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:34:31.034743   47034 kubeadm.go:309] 
	I0520 11:34:31.034905   47034 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:34:31.035032   47034 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:34:31.035150   47034 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:34:31.035269   47034 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:34:31.035284   47034 kubeadm.go:309] 
	I0520 11:34:31.036105   47034 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:34:31.036210   47034 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:34:31.036316   47034 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:34:31.036403   47034 kubeadm.go:393] duration metric: took 3m56.947366487s to StartCluster
	I0520 11:34:31.036450   47034 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:34:31.036514   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:34:31.086544   47034 cri.go:89] found id: ""
	I0520 11:34:31.086572   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.086582   47034 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:34:31.086589   47034 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:34:31.086649   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:34:31.123246   47034 cri.go:89] found id: ""
	I0520 11:34:31.123276   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.123286   47034 logs.go:278] No container was found matching "etcd"
	I0520 11:34:31.123293   47034 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:34:31.123362   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:34:31.160029   47034 cri.go:89] found id: ""
	I0520 11:34:31.160055   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.160063   47034 logs.go:278] No container was found matching "coredns"
	I0520 11:34:31.160068   47034 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:34:31.160126   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:34:31.196579   47034 cri.go:89] found id: ""
	I0520 11:34:31.196607   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.196618   47034 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:34:31.196626   47034 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:34:31.196687   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:34:31.237668   47034 cri.go:89] found id: ""
	I0520 11:34:31.237690   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.237698   47034 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:34:31.237703   47034 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:34:31.237750   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:34:31.279711   47034 cri.go:89] found id: ""
	I0520 11:34:31.279736   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.279743   47034 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:34:31.279748   47034 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:34:31.279800   47034 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:34:31.319303   47034 cri.go:89] found id: ""
	I0520 11:34:31.319325   47034 logs.go:276] 0 containers: []
	W0520 11:34:31.319333   47034 logs.go:278] No container was found matching "kindnet"
	I0520 11:34:31.319342   47034 logs.go:123] Gathering logs for kubelet ...
	I0520 11:34:31.319353   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:34:31.370002   47034 logs.go:123] Gathering logs for dmesg ...
	I0520 11:34:31.370041   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:34:31.384416   47034 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:34:31.384444   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:34:31.503000   47034 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:34:31.503021   47034 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:34:31.503033   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:34:31.602403   47034 logs.go:123] Gathering logs for container status ...
	I0520 11:34:31.602438   47034 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:34:31.651098   47034 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:34:31.651140   47034 out.go:239] * 
	* 
	W0520 11:34:31.651209   47034 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:34:31.651231   47034 out.go:239] * 
	* 
	W0520 11:34:31.652014   47034 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:34:31.654797   47034 out.go:177] 
	W0520 11:34:31.656145   47034 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:34:31.656205   47034 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:34:31.656242   47034 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:34:31.657688   47034 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
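Editor's note: the kubeadm output captured above repeatedly points at the kubelet, and minikube's own suggestion is to inspect it and, if the cgroup driver is the cause, retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of gathering those diagnostics by hand, assuming the kubernetes-upgrade-854547 VM is still reachable over ssh (the commands are the ones the kubeadm output and the suggestion above already name; sudo is added here on the assumption the services and the CRI-O socket are root-owned):

    # Checks recommended by the kubeadm output, run from the host via minikube ssh
    out/minikube-linux-amd64 -p kubernetes-upgrade-854547 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p kubernetes-upgrade-854547 ssh "sudo journalctl -xeu kubelet"
    # Probe the health endpoint kubeadm was polling (it returned connection refused above)
    out/minikube-linux-amd64 -p kubernetes-upgrade-854547 ssh "curl -sSL http://localhost:10248/healthz"
    # List any control-plane containers CRI-O managed to start
    out/minikube-linux-amd64 -p kubernetes-upgrade-854547 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # If a cgroup-driver mismatch is confirmed, the suggested retry from the log above is:
    out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio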
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-854547
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-854547: (6.288079524s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-854547 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-854547 status --format={{.Host}}: exit status 7 (62.137757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0520 11:34:54.309353   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.859142199s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-854547 version --output=json
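Editor's note: the version check above is what confirms the upgrade to v1.30.1 actually landed on the server side. As a sketch of reading that JSON by hand (assuming jq is available; the field names follow kubectl's JSON version output):

    # Print only the API server's reported version
    kubectl --context kubernetes-upgrade-854547 version --output=json | jq -r '.serverVersion.gitVersion'
    # Expected after the upgrade step above: v1.30.1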
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.424293ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854547] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-854547
	    minikube start -p kubernetes-upgrade-854547 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8545472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-854547 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
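Editor's note: the exit status 106 refusal above is the expected path for this step, and its stderr already spells out the remediation. As a sketch, option 1 from that suggestion (recreate the profile at the older version) would be:

    # Option 1 from the suggestion above: recreate the cluster at v1.20.0
    minikube delete -p kubernetes-upgrade-854547
    minikube start -p kubernetes-upgrade-854547 --kubernetes-version=v1.20.0
    # The test instead keeps the existing cluster and restarts it at v1.30.1, as shown below.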
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854547 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.245419079s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-20 11:36:54.307934194 +0000 UTC m=+4584.019817033
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-854547 -n kubernetes-upgrade-854547
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-854547 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-854547 logs -n 25: (1.651083994s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-893050 sudo crio            | cilium-893050             | jenkins | v1.33.1 | 20 May 24 11:33 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-893050                      | cilium-893050             | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:33 UTC |
	| start   | -p force-systemd-env-735967           | force-systemd-env-735967  | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:34 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-388098                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:33 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-388098                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:33 UTC |
	| start   | -p NoKubernetes-388098                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:34 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-735967           | force-systemd-env-735967  | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p force-systemd-flag-199580          | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-388098 sudo           | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-388098                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p NoKubernetes-388098                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-854547          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p kubernetes-upgrade-854547          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-199580 ssh cat     | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-199580          | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	| start   | -p cert-expiration-355991             | cert-expiration-355991    | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-388098 sudo           | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:35 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-388098                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	| start   | -p cert-options-477107                | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-854547          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:35 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-854547          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-477107 ssh               | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-477107 -- sudo        | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-477107                | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p old-k8s-version-210420             | old-k8s-version-210420    | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:36:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:36:34.924585   54732 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:36:34.924695   54732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:34.924703   54732 out.go:304] Setting ErrFile to fd 2...
	I0520 11:36:34.924708   54732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:34.924867   54732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:36:34.925474   54732 out.go:298] Setting JSON to false
	I0520 11:36:34.926335   54732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4738,"bootTime":1716200257,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:36:34.926390   54732 start.go:139] virtualization: kvm guest
	I0520 11:36:34.928768   54732 out.go:177] * [old-k8s-version-210420] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:36:34.930252   54732 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:36:34.930263   54732 notify.go:220] Checking for updates...
	I0520 11:36:34.931679   54732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:36:34.933173   54732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:36:34.934463   54732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:34.935908   54732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:36:34.937216   54732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:36:34.939062   54732 config.go:182] Loaded profile config "cert-expiration-355991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:34.939195   54732 config.go:182] Loaded profile config "kubernetes-upgrade-854547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:34.939359   54732 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:34.939456   54732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:36:34.977941   54732 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:36:34.979409   54732 start.go:297] selected driver: kvm2
	I0520 11:36:34.979437   54732 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:36:34.979450   54732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:36:34.980387   54732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:34.980479   54732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:36:34.995908   54732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:36:34.995989   54732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:36:34.996261   54732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:36:34.996298   54732 cni.go:84] Creating CNI manager for ""
	I0520 11:36:34.996308   54732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:36:34.996321   54732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:36:34.996392   54732 start.go:340] cluster config:
	{Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:36:34.996529   54732 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:34.998465   54732 out.go:177] * Starting "old-k8s-version-210420" primary control-plane node in "old-k8s-version-210420" cluster
	I0520 11:36:34.999693   54732 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:36:34.999727   54732 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 11:36:34.999733   54732 cache.go:56] Caching tarball of preloaded images
	I0520 11:36:34.999804   54732 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:36:34.999820   54732 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 11:36:34.999919   54732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:36:34.999937   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json: {Name:mkba1d4dc9330c5a6116d6f4725e66ffa908c3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:36:35.000083   54732 start.go:360] acquireMachinesLock for old-k8s-version-210420: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:36:35.000121   54732 start.go:364] duration metric: took 18.485µs to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:36:35.000143   54732 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:36:35.000199   54732 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 11:36:32.213407   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:32.250251   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:32.250338   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:32.304200   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:32.304221   49001 cri.go:89] found id: ""
	I0520 11:36:32.304229   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:32.304273   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.308986   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:32.309037   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:32.357314   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:32.357341   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:32.357347   49001 cri.go:89] found id: ""
	I0520 11:36:32.357356   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:32.357411   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.361860   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.367327   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:32.367396   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:32.412641   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:32.412665   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:32.412671   49001 cri.go:89] found id: ""
	I0520 11:36:32.412679   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:32.412731   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.418278   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.424053   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:32.424127   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:32.476759   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:32.476784   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:32.476790   49001 cri.go:89] found id: ""
	I0520 11:36:32.476798   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:32.476864   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.482879   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.488070   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:32.488136   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:32.536846   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:32.536873   49001 cri.go:89] found id: ""
	I0520 11:36:32.536882   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:32.536956   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.542665   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:32.542738   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:32.589229   49001 cri.go:89] found id: "56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:32.589258   49001 cri.go:89] found id: ""
	I0520 11:36:32.589267   49001 logs.go:276] 1 containers: [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181]
	I0520 11:36:32.589325   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:32.595068   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:32.595148   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:32.643847   49001 cri.go:89] found id: ""
	I0520 11:36:32.643881   49001 logs.go:276] 0 containers: []
	W0520 11:36:32.643891   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:32.643909   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:32.643922   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:32.697491   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:32.697517   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:32.745459   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:32.745497   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:32.876677   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:32.876727   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:32.969444   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:32.969471   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:32.969486   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:33.066555   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:33.066592   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:33.127311   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:33.127349   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:33.147217   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:33.147251   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:33.236370   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:33.236402   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:33.280897   49001 logs.go:123] Gathering logs for kube-controller-manager [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181] ...
	I0520 11:36:33.280955   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:33.333074   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:33.333101   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:33.377828   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:33.377869   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:33.425393   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:33.425420   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:33.462661   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:33Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:33Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:33.462684   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:33.462695   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:35.002725   54732 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 11:36:35.002882   54732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:35.002925   54732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:35.017943   54732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35777
	I0520 11:36:35.018361   54732 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:35.018933   54732 main.go:141] libmachine: Using API Version  1
	I0520 11:36:35.018957   54732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:35.019326   54732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:35.019531   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:35.019673   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:35.019868   54732 start.go:159] libmachine.API.Create for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:36:35.019898   54732 client.go:168] LocalClient.Create starting
	I0520 11:36:35.019933   54732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 11:36:35.019975   54732 main.go:141] libmachine: Decoding PEM data...
	I0520 11:36:35.019998   54732 main.go:141] libmachine: Parsing certificate...
	I0520 11:36:35.020074   54732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 11:36:35.020101   54732 main.go:141] libmachine: Decoding PEM data...
	I0520 11:36:35.020120   54732 main.go:141] libmachine: Parsing certificate...
	I0520 11:36:35.020144   54732 main.go:141] libmachine: Running pre-create checks...
	I0520 11:36:35.020156   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .PreCreateCheck
	I0520 11:36:35.020505   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:36:35.020895   54732 main.go:141] libmachine: Creating machine...
	I0520 11:36:35.020908   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .Create
	I0520 11:36:35.021045   54732 main.go:141] libmachine: (old-k8s-version-210420) Creating KVM machine...
	I0520 11:36:35.022377   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found existing default KVM network
	I0520 11:36:35.023751   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.023592   54755 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:d0:04} reservation:<nil>}
	I0520 11:36:35.024764   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.024687   54755 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:e9:79} reservation:<nil>}
	I0520 11:36:35.026003   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.025924   54755 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:e6:f9} reservation:<nil>}
	I0520 11:36:35.028479   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.028403   54755 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0520 11:36:35.029634   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.029565   54755 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014d70}
	I0520 11:36:35.029659   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | created network xml: 
	I0520 11:36:35.029667   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | <network>
	I0520 11:36:35.029675   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   <name>mk-old-k8s-version-210420</name>
	I0520 11:36:35.029680   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   <dns enable='no'/>
	I0520 11:36:35.029685   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   
	I0520 11:36:35.029700   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0520 11:36:35.029714   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |     <dhcp>
	I0520 11:36:35.029726   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0520 11:36:35.029740   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |     </dhcp>
	I0520 11:36:35.029748   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   </ip>
	I0520 11:36:35.029753   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   
	I0520 11:36:35.029761   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | </network>
	I0520 11:36:35.029771   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | 
	I0520 11:36:35.035567   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | trying to create private KVM network mk-old-k8s-version-210420 192.168.83.0/24...
	I0520 11:36:35.110126   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420 ...
	I0520 11:36:35.110161   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | private KVM network mk-old-k8s-version-210420 192.168.83.0/24 created
	I0520 11:36:35.110185   54732 main.go:141] libmachine: (old-k8s-version-210420) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:36:35.110205   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.110056   54755 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:35.110259   54732 main.go:141] libmachine: (old-k8s-version-210420) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 11:36:35.361587   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.361434   54755 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa...
	I0520 11:36:35.721187   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.721073   54755 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/old-k8s-version-210420.rawdisk...
	I0520 11:36:35.721215   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Writing magic tar header
	I0520 11:36:35.721232   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Writing SSH key tar header
	I0520 11:36:35.721245   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.721174   54755 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420 ...
	I0520 11:36:35.721274   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420
	I0520 11:36:35.721313   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420 (perms=drwx------)
	I0520 11:36:35.721328   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 11:36:35.721339   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 11:36:35.721394   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 11:36:35.721426   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:35.721441   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 11:36:35.721457   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 11:36:35.721471   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 11:36:35.721482   54732 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:36:35.721500   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 11:36:35.721512   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 11:36:35.721526   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins
	I0520 11:36:35.721542   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home
	I0520 11:36:35.721571   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Skipping /home - not owner
	I0520 11:36:35.722625   54732 main.go:141] libmachine: (old-k8s-version-210420) define libvirt domain using xml: 
	I0520 11:36:35.722650   54732 main.go:141] libmachine: (old-k8s-version-210420) <domain type='kvm'>
	I0520 11:36:35.722662   54732 main.go:141] libmachine: (old-k8s-version-210420)   <name>old-k8s-version-210420</name>
	I0520 11:36:35.722671   54732 main.go:141] libmachine: (old-k8s-version-210420)   <memory unit='MiB'>2200</memory>
	I0520 11:36:35.722680   54732 main.go:141] libmachine: (old-k8s-version-210420)   <vcpu>2</vcpu>
	I0520 11:36:35.722688   54732 main.go:141] libmachine: (old-k8s-version-210420)   <features>
	I0520 11:36:35.722696   54732 main.go:141] libmachine: (old-k8s-version-210420)     <acpi/>
	I0520 11:36:35.722703   54732 main.go:141] libmachine: (old-k8s-version-210420)     <apic/>
	I0520 11:36:35.722713   54732 main.go:141] libmachine: (old-k8s-version-210420)     <pae/>
	I0520 11:36:35.722718   54732 main.go:141] libmachine: (old-k8s-version-210420)     
	I0520 11:36:35.722727   54732 main.go:141] libmachine: (old-k8s-version-210420)   </features>
	I0520 11:36:35.722732   54732 main.go:141] libmachine: (old-k8s-version-210420)   <cpu mode='host-passthrough'>
	I0520 11:36:35.722759   54732 main.go:141] libmachine: (old-k8s-version-210420)   
	I0520 11:36:35.722794   54732 main.go:141] libmachine: (old-k8s-version-210420)   </cpu>
	I0520 11:36:35.722804   54732 main.go:141] libmachine: (old-k8s-version-210420)   <os>
	I0520 11:36:35.722815   54732 main.go:141] libmachine: (old-k8s-version-210420)     <type>hvm</type>
	I0520 11:36:35.722833   54732 main.go:141] libmachine: (old-k8s-version-210420)     <boot dev='cdrom'/>
	I0520 11:36:35.722844   54732 main.go:141] libmachine: (old-k8s-version-210420)     <boot dev='hd'/>
	I0520 11:36:35.722854   54732 main.go:141] libmachine: (old-k8s-version-210420)     <bootmenu enable='no'/>
	I0520 11:36:35.722868   54732 main.go:141] libmachine: (old-k8s-version-210420)   </os>
	I0520 11:36:35.722881   54732 main.go:141] libmachine: (old-k8s-version-210420)   <devices>
	I0520 11:36:35.722888   54732 main.go:141] libmachine: (old-k8s-version-210420)     <disk type='file' device='cdrom'>
	I0520 11:36:35.722905   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/boot2docker.iso'/>
	I0520 11:36:35.722916   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target dev='hdc' bus='scsi'/>
	I0520 11:36:35.722925   54732 main.go:141] libmachine: (old-k8s-version-210420)       <readonly/>
	I0520 11:36:35.722935   54732 main.go:141] libmachine: (old-k8s-version-210420)     </disk>
	I0520 11:36:35.722945   54732 main.go:141] libmachine: (old-k8s-version-210420)     <disk type='file' device='disk'>
	I0520 11:36:35.722963   54732 main.go:141] libmachine: (old-k8s-version-210420)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 11:36:35.722981   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/old-k8s-version-210420.rawdisk'/>
	I0520 11:36:35.722993   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target dev='hda' bus='virtio'/>
	I0520 11:36:35.723005   54732 main.go:141] libmachine: (old-k8s-version-210420)     </disk>
	I0520 11:36:35.723016   54732 main.go:141] libmachine: (old-k8s-version-210420)     <interface type='network'>
	I0520 11:36:35.723033   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source network='mk-old-k8s-version-210420'/>
	I0520 11:36:35.723048   54732 main.go:141] libmachine: (old-k8s-version-210420)       <model type='virtio'/>
	I0520 11:36:35.723059   54732 main.go:141] libmachine: (old-k8s-version-210420)     </interface>
	I0520 11:36:35.723073   54732 main.go:141] libmachine: (old-k8s-version-210420)     <interface type='network'>
	I0520 11:36:35.723090   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source network='default'/>
	I0520 11:36:35.723098   54732 main.go:141] libmachine: (old-k8s-version-210420)       <model type='virtio'/>
	I0520 11:36:35.723103   54732 main.go:141] libmachine: (old-k8s-version-210420)     </interface>
	I0520 11:36:35.723110   54732 main.go:141] libmachine: (old-k8s-version-210420)     <serial type='pty'>
	I0520 11:36:35.723135   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target port='0'/>
	I0520 11:36:35.723156   54732 main.go:141] libmachine: (old-k8s-version-210420)     </serial>
	I0520 11:36:35.723170   54732 main.go:141] libmachine: (old-k8s-version-210420)     <console type='pty'>
	I0520 11:36:35.723180   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target type='serial' port='0'/>
	I0520 11:36:35.723193   54732 main.go:141] libmachine: (old-k8s-version-210420)     </console>
	I0520 11:36:35.723227   54732 main.go:141] libmachine: (old-k8s-version-210420)     <rng model='virtio'>
	I0520 11:36:35.723241   54732 main.go:141] libmachine: (old-k8s-version-210420)       <backend model='random'>/dev/random</backend>
	I0520 11:36:35.723251   54732 main.go:141] libmachine: (old-k8s-version-210420)     </rng>
	I0520 11:36:35.723261   54732 main.go:141] libmachine: (old-k8s-version-210420)     
	I0520 11:36:35.723269   54732 main.go:141] libmachine: (old-k8s-version-210420)     
	I0520 11:36:35.723279   54732 main.go:141] libmachine: (old-k8s-version-210420)   </devices>
	I0520 11:36:35.723290   54732 main.go:141] libmachine: (old-k8s-version-210420) </domain>
	I0520 11:36:35.723299   54732 main.go:141] libmachine: (old-k8s-version-210420) 
	I0520 11:36:35.727234   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:7a:d0:98 in network default
	I0520 11:36:35.727750   54732 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:36:35.727768   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:35.728436   54732 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:36:35.728679   54732 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:36:35.729118   54732 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:36:35.729706   54732 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:36:37.054514   54732 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:36:37.055222   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:37.055663   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:37.055693   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:37.055611   54755 retry.go:31] will retry after 250.742445ms: waiting for machine to come up
	I0520 11:36:37.308328   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:37.308967   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:37.308995   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:37.308910   54755 retry.go:31] will retry after 389.201353ms: waiting for machine to come up
	I0520 11:36:37.699518   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:37.699981   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:37.700011   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:37.699941   54755 retry.go:31] will retry after 435.785471ms: waiting for machine to come up
	I0520 11:36:38.137687   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:38.138161   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:38.138190   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:38.138123   54755 retry.go:31] will retry after 521.950828ms: waiting for machine to come up
	I0520 11:36:38.661976   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:38.662533   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:38.662556   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:38.662488   54755 retry.go:31] will retry after 628.547713ms: waiting for machine to come up
	I0520 11:36:39.292067   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:39.292558   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:39.292582   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:39.292508   54755 retry.go:31] will retry after 602.715223ms: waiting for machine to come up
	I0520 11:36:39.897344   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:39.897833   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:39.897862   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:39.897782   54755 retry.go:31] will retry after 880.165132ms: waiting for machine to come up
	I0520 11:36:36.299052   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:36.314512   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:36.314584   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:36.352255   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:36.352280   49001 cri.go:89] found id: ""
	I0520 11:36:36.352289   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:36.352339   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.358766   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:36.358830   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:36.406830   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:36.406854   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:36.406860   49001 cri.go:89] found id: ""
	I0520 11:36:36.406868   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:36.406922   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.411882   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.415719   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:36.415766   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:36.453714   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:36.453739   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:36.453746   49001 cri.go:89] found id: ""
	I0520 11:36:36.453754   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:36.453809   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.457884   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.461658   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:36.461715   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:36.500276   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:36.500298   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:36.500302   49001 cri.go:89] found id: ""
	I0520 11:36:36.500308   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:36.500356   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.504914   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.508902   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:36.508991   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:36.551461   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:36.551486   49001 cri.go:89] found id: ""
	I0520 11:36:36.551496   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:36.551543   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.556363   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:36.556414   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:36.611708   49001 cri.go:89] found id: "56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:36.611736   49001 cri.go:89] found id: ""
	I0520 11:36:36.611745   49001 logs.go:276] 1 containers: [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181]
	I0520 11:36:36.611809   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:36.617345   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:36.617416   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:36.661143   49001 cri.go:89] found id: ""
	I0520 11:36:36.661170   49001 logs.go:276] 0 containers: []
	W0520 11:36:36.661180   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:36.661197   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:36.661213   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:36.761429   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:36.761472   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:36.780474   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:36.780499   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:36.859767   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:36.859823   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:36.905082   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:36.905110   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:36.946186   49001 logs.go:123] Gathering logs for kube-controller-manager [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181] ...
	I0520 11:36:36.946236   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:36.986286   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:36.986311   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:37.064633   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:37.064661   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:37.098871   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:37.098905   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:37.140207   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:37.140240   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:37.211262   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:37.211284   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:37.211299   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:37.258302   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:37.258334   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:37.307638   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:37Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:37Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:37.307663   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:37.307677   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:37.351602   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:37.351635   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:40.161957   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:40.177547   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:40.177623   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:40.215710   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:40.215736   49001 cri.go:89] found id: ""
	I0520 11:36:40.215743   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:40.215802   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.220283   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:40.220351   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:40.264583   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:40.264611   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:40.264620   49001 cri.go:89] found id: ""
	I0520 11:36:40.264628   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:40.264689   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.269663   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.273799   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:40.273869   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:40.309787   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:40.309820   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:40.309827   49001 cri.go:89] found id: ""
	I0520 11:36:40.309836   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:40.309891   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.315229   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.319250   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:40.319314   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:40.356229   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:40.356252   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:40.356258   49001 cri.go:89] found id: ""
	I0520 11:36:40.356268   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:40.356327   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.360630   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.364893   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:40.364976   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:40.448372   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:40.448399   49001 cri.go:89] found id: ""
	I0520 11:36:40.448408   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:40.448453   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.453861   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:40.453928   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:40.496504   49001 cri.go:89] found id: "56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:40.496526   49001 cri.go:89] found id: ""
	I0520 11:36:40.496536   49001 logs.go:276] 1 containers: [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181]
	I0520 11:36:40.496588   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:40.502388   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:40.502453   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:40.579754   49001 cri.go:89] found id: ""
	I0520 11:36:40.579781   49001 logs.go:276] 0 containers: []
	W0520 11:36:40.579792   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:40.579809   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:40.579829   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:40.600060   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:40.600099   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:40.659324   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:40.659354   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:40.711144   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:40.711172   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:40.759359   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:40.759387   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:40.812205   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:40Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:40Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:40.812231   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:40.812246   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:40.779736   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:40.780219   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:40.780248   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:40.780178   54755 retry.go:31] will retry after 1.272197275s: waiting for machine to come up
	I0520 11:36:42.054726   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:42.055207   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:42.055229   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:42.055143   54755 retry.go:31] will retry after 1.67461889s: waiting for machine to come up
	I0520 11:36:43.731060   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:43.731544   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:43.731573   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:43.731493   54755 retry.go:31] will retry after 2.249865047s: waiting for machine to come up
	I0520 11:36:40.882067   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:40.882097   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:41.159148   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:41.159186   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:41.224360   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:41.224391   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:41.329535   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:41.329570   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:41.409867   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:41.409894   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:41.409910   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:41.522526   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:41.522568   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:41.562601   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:41.562634   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:41.600832   49001 logs.go:123] Gathering logs for kube-controller-manager [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181] ...
	I0520 11:36:41.600859   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:44.135809   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:44.153294   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:44.153352   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:44.193873   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:44.193897   49001 cri.go:89] found id: ""
	I0520 11:36:44.193904   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:44.193951   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.199576   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:44.199645   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:44.248935   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:44.248960   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:44.248966   49001 cri.go:89] found id: ""
	I0520 11:36:44.248980   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:44.249043   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.253792   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.258266   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:44.258332   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:44.294210   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:44.294235   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:44.294241   49001 cri.go:89] found id: ""
	I0520 11:36:44.294249   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:44.294308   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.298621   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.303046   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:44.303109   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:44.348577   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:44.348597   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:44.348601   49001 cri.go:89] found id: ""
	I0520 11:36:44.348607   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:44.348655   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.354268   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.359483   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:44.359546   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:44.399807   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:44.399846   49001 cri.go:89] found id: ""
	I0520 11:36:44.399855   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:44.399917   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.404573   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:44.404644   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:44.461292   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:44.461334   49001 cri.go:89] found id: "56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:44.461338   49001 cri.go:89] found id: ""
	I0520 11:36:44.461344   49001 logs.go:276] 2 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181]
	I0520 11:36:44.461390   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.466918   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:44.471001   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:44.471071   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:44.508731   49001 cri.go:89] found id: ""
	I0520 11:36:44.508759   49001 logs.go:276] 0 containers: []
	W0520 11:36:44.508769   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:44.508785   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:44.508801   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:44.523394   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:44.523423   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:44.605073   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:44.605106   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:44.643200   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:44Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:44Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:44.643222   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:44.643238   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:44.744739   49001 logs.go:123] Gathering logs for kube-controller-manager [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181] ...
	I0520 11:36:44.744784   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:44.782733   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:44.782763   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:44.873829   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:44.873872   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:44.915126   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:44.915160   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:44.961357   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:44.961391   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:45.008396   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:45.008424   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:45.094634   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:45.094668   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:45.094685   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:45.148244   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:45.148282   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:45.191406   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:45.191446   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:45.239274   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:36:45.239311   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:45.283303   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:45.283339   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:45.610156   54111 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1 02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c 9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5 13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe 65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85 5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a 5f63a80281ff77de43fffe1856568d96eb2b8fb8a3b21a3bca790279affd1637 91acdfbe85ec9ba255dc4313697fa7acefc88f5fdf8d51fcf07725652e3dead5 a5234daff9724969ca47bb9b6f877bcb2f0ffb6fceb29de9577260414384f0ae: (20.171978102s)
	W0520 11:36:45.610237   54111 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1 02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c 9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5 13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe 65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85 5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a 5f63a80281ff77de43fffe1856568d96eb2b8fb8a3b21a3bca790279affd1637 91acdfbe85ec9ba255dc4313697fa7acefc88f5fdf8d51fcf07725652e3dead5 a5234daff9724969ca47bb9b6f877bcb2f0ffb6fceb29de9577260414384f0ae: Process exited with status 1
	stdout:
	7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1
	02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c
	9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5
	13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe
	
	stderr:
	E0520 11:36:45.579299    2742 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c\": container with ID starting with 65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c not found: ID does not exist" containerID="65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c"
	time="2024-05-20T11:36:45Z" level=fatal msg="stopping the container \"65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c\": rpc error: code = NotFound desc = could not find container \"65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c\": container with ID starting with 65c431c2ac8297be6bed1c804289da8491c399793d000eb051d3787ddced887c not found: ID does not exist"
	I0520 11:36:45.610325   54111 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:36:45.656800   54111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:36:45.671477   54111 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 20 11:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 20 11:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 May 20 11:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 May 20 11:35 /etc/kubernetes/scheduler.conf
	
	I0520 11:36:45.671541   54111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:36:45.683756   54111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:36:45.694531   54111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:36:45.703703   54111 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:36:45.703773   54111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:36:45.718221   54111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:36:45.729387   54111 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:36:45.729459   54111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:36:45.742489   54111 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:36:45.756578   54111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:36:45.813763   54111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:36:45.983452   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:45.984016   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:45.984068   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:45.983968   54755 retry.go:31] will retry after 2.737409087s: waiting for machine to come up
	I0520 11:36:48.722775   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:48.723225   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:48.723255   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:48.723183   54755 retry.go:31] will retry after 2.212815155s: waiting for machine to come up
	I0520 11:36:48.085597   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:48.099344   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:48.099407   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:48.134766   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:48.134788   49001 cri.go:89] found id: ""
	I0520 11:36:48.134795   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:48.134845   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.138836   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:48.138939   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:48.176479   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:48.176503   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:48.176508   49001 cri.go:89] found id: ""
	I0520 11:36:48.176516   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:48.176571   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.180620   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.184391   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:48.184464   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:48.219768   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:48.219795   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:48.219801   49001 cri.go:89] found id: ""
	I0520 11:36:48.219808   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:48.219881   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.223903   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.227695   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:48.227759   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:48.261735   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:48.261755   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:48.261758   49001 cri.go:89] found id: ""
	I0520 11:36:48.261764   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:48.261808   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.265927   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.270049   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:48.270110   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:48.308396   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:48.308424   49001 cri.go:89] found id: ""
	I0520 11:36:48.308434   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:48.308496   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.313148   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:48.313216   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:48.354159   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:48.354187   49001 cri.go:89] found id: "56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:48.354192   49001 cri.go:89] found id: ""
	I0520 11:36:48.354201   49001 logs.go:276] 2 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181]
	I0520 11:36:48.354260   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.358562   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:48.362618   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:48.362670   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:48.405756   49001 cri.go:89] found id: ""
	I0520 11:36:48.405783   49001 logs.go:276] 0 containers: []
	W0520 11:36:48.405794   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:48.405810   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:48.405825   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:48.496360   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:36:48.496391   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:48.536707   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:48.536733   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:48.581720   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:48.581750   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:48.626227   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:48.626262   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:48.674160   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:48.674191   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:48.709845   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:48.709878   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:48.778808   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:48.778844   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:48.814224   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:48Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:48Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:48.814247   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:48.814262   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:48.852159   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:48.852191   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:48.868132   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:48.868158   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:48.911427   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:48.911457   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:49.191687   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:49.191732   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:49.286367   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:49.286403   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:49.360577   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:49.360602   49001 logs.go:123] Gathering logs for kube-controller-manager [56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181] ...
	I0520 11:36:49.360617   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bd86cfc40c317cb04070817b4b2989f707d6741dd189902a48b2a9e0899181"
	I0520 11:36:47.431681   54111 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.617882594s)
	I0520 11:36:47.431710   54111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:36:47.679317   54111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:36:47.748971   54111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:36:47.844079   54111 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:36:47.844178   54111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:48.344227   54111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:48.844543   54111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:48.862783   54111 api_server.go:72] duration metric: took 1.018705474s to wait for apiserver process to appear ...
	I0520 11:36:48.862819   54111 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:36:48.862840   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:51.132907   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:36:51.132956   54111 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:36:51.132969   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:51.175598   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:36:51.175638   54111 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:36:51.363908   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:51.368477   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:36:51.368499   54111 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:36:51.863653   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:51.870068   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:36:51.870092   54111 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:36:52.363896   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:52.370082   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:36:52.370114   54111 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:36:52.863749   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:52.867961   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0520 11:36:52.874577   54111 api_server.go:141] control plane version: v1.30.1
	I0520 11:36:52.874603   54111 api_server.go:131] duration metric: took 4.011776991s to wait for apiserver health ...
	I0520 11:36:52.874613   54111 cni.go:84] Creating CNI manager for ""
	I0520 11:36:52.874621   54111 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:36:52.876337   54111 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:36:52.877826   54111 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:36:52.892182   54111 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:36:52.912680   54111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:36:52.923817   54111 system_pods.go:59] 8 kube-system pods found
	I0520 11:36:52.923850   54111 system_pods.go:61] "coredns-7db6d8ff4d-lcrlw" [e6d92ee8-7bcf-41d4-8763-ffda61451835] Running
	I0520 11:36:52.923857   54111 system_pods.go:61] "coredns-7db6d8ff4d-tbwm4" [6a1a1123-c554-4b5e-92c7-850e35340002] Running
	I0520 11:36:52.923866   54111 system_pods.go:61] "etcd-kubernetes-upgrade-854547" [2228ac93-b846-4755-ab0b-fa1e99f40d8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:36:52.923880   54111 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-854547" [8bf65c47-6c7e-4fc2-8950-05c427637c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:36:52.923895   54111 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-854547" [4ce5d73d-8cd2-4119-bc47-36cc7bd84f8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:36:52.923905   54111 system_pods.go:61] "kube-proxy-wdk8j" [de31fbca-6c00-409c-a231-2b6111ba7505] Running
	I0520 11:36:52.923913   54111 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-854547" [8a351311-cc8d-4de7-a93a-a75667e367c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:36:52.923925   54111 system_pods.go:61] "storage-provisioner" [31250cf8-fdb8-4df6-bd46-5ba2298421c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:36:52.923937   54111 system_pods.go:74] duration metric: took 11.229516ms to wait for pod list to return data ...
	I0520 11:36:52.923948   54111 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:36:52.928120   54111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:36:52.928147   54111 node_conditions.go:123] node cpu capacity is 2
	I0520 11:36:52.928158   54111 node_conditions.go:105] duration metric: took 4.201613ms to run NodePressure ...
	I0520 11:36:52.928175   54111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:36:53.248825   54111 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:36:53.263094   54111 ops.go:34] apiserver oom_adj: -16
	I0520 11:36:53.263115   54111 kubeadm.go:591] duration metric: took 27.937621644s to restartPrimaryControlPlane
	I0520 11:36:53.263124   54111 kubeadm.go:393] duration metric: took 28.095140423s to StartCluster
	I0520 11:36:53.263138   54111 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:36:53.263201   54111 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:36:53.264263   54111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:36:53.264467   54111 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:36:53.266071   54111 out.go:177] * Verifying Kubernetes components...
	I0520 11:36:53.264546   54111 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:36:53.264667   54111 config.go:182] Loaded profile config "kubernetes-upgrade-854547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:53.267540   54111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:36:53.266106   54111 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-854547"
	I0520 11:36:53.267589   54111 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-854547"
	W0520 11:36:53.267602   54111 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:36:53.267629   54111 host.go:66] Checking if "kubernetes-upgrade-854547" exists ...
	I0520 11:36:53.266117   54111 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-854547"
	I0520 11:36:53.267681   54111 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-854547"
	I0520 11:36:53.267977   54111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:53.268000   54111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:53.268014   54111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:53.268028   54111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:53.288086   54111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I0520 11:36:53.288449   54111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0520 11:36:53.288541   54111 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:53.288838   54111 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:53.289066   54111 main.go:141] libmachine: Using API Version  1
	I0520 11:36:53.289088   54111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:53.289328   54111 main.go:141] libmachine: Using API Version  1
	I0520 11:36:53.289353   54111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:53.289437   54111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:53.289665   54111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:53.289835   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetState
	I0520 11:36:53.290006   54111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:53.290053   54111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:53.292811   54111 kapi.go:59] client config for kubernetes-upgrade-854547: &rest.Config{Host:"https://192.168.39.87:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.crt", KeyFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kubernetes-upgrade-854547/client.key", CAFile:"/home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0520 11:36:53.293164   54111 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-854547"
	W0520 11:36:53.293190   54111 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:36:53.293217   54111 host.go:66] Checking if "kubernetes-upgrade-854547" exists ...
	I0520 11:36:53.293562   54111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:53.293604   54111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:53.305627   54111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0520 11:36:53.306045   54111 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:53.306450   54111 main.go:141] libmachine: Using API Version  1
	I0520 11:36:53.306470   54111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:53.306757   54111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:53.306973   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetState
	I0520 11:36:53.308520   54111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42529
	I0520 11:36:53.308834   54111 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:53.308947   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:36:53.311104   54111 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:36:53.309404   54111 main.go:141] libmachine: Using API Version  1
	I0520 11:36:53.312351   54111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:53.312447   54111 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:36:53.312467   54111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:36:53.312484   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:36:53.313262   54111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:53.313844   54111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:53.313903   54111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:53.315555   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:36:53.316000   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:35:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:36:53.316020   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:36:53.316172   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:36:53.316337   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:36:53.316528   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:36:53.316646   54111 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa Username:docker}
	I0520 11:36:53.329606   54111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44605
	I0520 11:36:53.329994   54111 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:53.330371   54111 main.go:141] libmachine: Using API Version  1
	I0520 11:36:53.330386   54111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:53.330691   54111 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:53.330870   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetState
	I0520 11:36:53.332228   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .DriverName
	I0520 11:36:53.332412   54111 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:36:53.332433   54111 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:36:53.332448   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHHostname
	I0520 11:36:53.335038   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:36:53.335443   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ec:cf", ip: ""} in network mk-kubernetes-upgrade-854547: {Iface:virbr1 ExpiryTime:2024-05-20 12:35:14 +0000 UTC Type:0 Mac:52:54:00:60:ec:cf Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-854547 Clientid:01:52:54:00:60:ec:cf}
	I0520 11:36:53.335463   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | domain kubernetes-upgrade-854547 has defined IP address 192.168.39.87 and MAC address 52:54:00:60:ec:cf in network mk-kubernetes-upgrade-854547
	I0520 11:36:53.335587   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHPort
	I0520 11:36:53.335743   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHKeyPath
	I0520 11:36:53.335874   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .GetSSHUsername
	I0520 11:36:53.335972   54111 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kubernetes-upgrade-854547/id_rsa Username:docker}
	I0520 11:36:53.478008   54111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:36:53.493764   54111 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:36:53.493840   54111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:53.509251   54111 api_server.go:72] duration metric: took 244.757979ms to wait for apiserver process to appear ...
	I0520 11:36:53.509276   54111 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:36:53.509302   54111 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0520 11:36:53.514407   54111 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0520 11:36:53.515211   54111 api_server.go:141] control plane version: v1.30.1
	I0520 11:36:53.515228   54111 api_server.go:131] duration metric: took 5.946882ms to wait for apiserver health ...
	I0520 11:36:53.515235   54111 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:36:53.520422   54111 system_pods.go:59] 8 kube-system pods found
	I0520 11:36:53.520441   54111 system_pods.go:61] "coredns-7db6d8ff4d-lcrlw" [e6d92ee8-7bcf-41d4-8763-ffda61451835] Running
	I0520 11:36:53.520446   54111 system_pods.go:61] "coredns-7db6d8ff4d-tbwm4" [6a1a1123-c554-4b5e-92c7-850e35340002] Running
	I0520 11:36:53.520451   54111 system_pods.go:61] "etcd-kubernetes-upgrade-854547" [2228ac93-b846-4755-ab0b-fa1e99f40d8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:36:53.520458   54111 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-854547" [8bf65c47-6c7e-4fc2-8950-05c427637c92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:36:53.520467   54111 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-854547" [4ce5d73d-8cd2-4119-bc47-36cc7bd84f8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:36:53.520474   54111 system_pods.go:61] "kube-proxy-wdk8j" [de31fbca-6c00-409c-a231-2b6111ba7505] Running
	I0520 11:36:53.520483   54111 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-854547" [8a351311-cc8d-4de7-a93a-a75667e367c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:36:53.520492   54111 system_pods.go:61] "storage-provisioner" [31250cf8-fdb8-4df6-bd46-5ba2298421c2] Running
	I0520 11:36:53.520500   54111 system_pods.go:74] duration metric: took 5.259018ms to wait for pod list to return data ...
	I0520 11:36:53.520512   54111 kubeadm.go:576] duration metric: took 256.021202ms to wait for: map[apiserver:true system_pods:true]
	I0520 11:36:53.520526   54111 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:36:53.524769   54111 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:36:53.524787   54111 node_conditions.go:123] node cpu capacity is 2
	I0520 11:36:53.524795   54111 node_conditions.go:105] duration metric: took 4.265197ms to run NodePressure ...
	I0520 11:36:53.524805   54111 start.go:240] waiting for startup goroutines ...
	I0520 11:36:53.588072   54111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:36:53.605074   54111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:36:54.235240   54111 main.go:141] libmachine: Making call to close driver server
	I0520 11:36:54.235264   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .Close
	I0520 11:36:54.235311   54111 main.go:141] libmachine: Making call to close driver server
	I0520 11:36:54.235333   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .Close
	I0520 11:36:54.235604   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Closing plugin on server side
	I0520 11:36:54.235640   54111 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:36:54.235648   54111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:36:54.235658   54111 main.go:141] libmachine: Making call to close driver server
	I0520 11:36:54.235668   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .Close
	I0520 11:36:54.235705   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Closing plugin on server side
	I0520 11:36:54.235717   54111 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:36:54.235749   54111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:36:54.235804   54111 main.go:141] libmachine: Making call to close driver server
	I0520 11:36:54.235831   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .Close
	I0520 11:36:54.235861   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Closing plugin on server side
	I0520 11:36:54.235890   54111 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:36:54.235905   54111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:36:54.236019   54111 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:36:54.236035   54111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:36:54.241353   54111 main.go:141] libmachine: Making call to close driver server
	I0520 11:36:54.241369   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) Calling .Close
	I0520 11:36:54.241572   54111 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:36:54.241587   54111 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:36:54.241594   54111 main.go:141] libmachine: (kubernetes-upgrade-854547) DBG | Closing plugin on server side
	I0520 11:36:54.243591   54111 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 11:36:54.244794   54111 addons.go:505] duration metric: took 980.252206ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 11:36:54.244824   54111 start.go:245] waiting for cluster config update ...
	I0520 11:36:54.244834   54111 start.go:254] writing updated cluster config ...
	I0520 11:36:54.245078   54111 ssh_runner.go:195] Run: rm -f paused
	I0520 11:36:54.292667   54111 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:36:54.294604   54111 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-854547" cluster and "default" namespace by default
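	(Editor's illustration, not part of the captured log or minikube source: the api_server.go entries above repeatedly poll https://192.168.39.87:8443/healthz and only proceed to addon enabling once it returns 200. Below is a minimal standalone Go sketch of that polling pattern. The endpoint, timeout, and the use of InsecureSkipVerify are assumptions for brevity; minikube itself loads the cluster CA and the client cert/key shown in the kapi.go client config above.)

	// healthz_poll.go - hedged sketch of an apiserver /healthz wait loop.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		// Assumption: skip server-cert verification for the sketch; a real client
		// would configure the CA and client certificate/key instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// Mirrors the "healthz returned 200: ok" lines in the log above.
					fmt.Printf("healthz returned 200: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; the one-minute timeout is an assumption.
		if err := waitForHealthz("https://192.168.39.87:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}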
	
	
	==> CRI-O <==
	May 20 11:36:54 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:54.990996634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205014990970785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=737910c5-f338-487d-a291-0f359b930c43 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:54 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:54.991533137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c6fba36-c36a-4377-9ffe-8eb5a5fcb498 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:54 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:54.991610562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c6fba36-c36a-4377-9ffe-8eb5a5fcb498 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:54 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:54.992328747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc6985f03c871ff25faa3663996aa309231cc84e5541f6468e3aa89cce628a02,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205012099464483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2db3feb00e0ca9266e946128a674254a14a5c99d9ae43923fe49b8f8a0c740,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205008292607334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5437cf9e02c46b6e4e6ada3614c0c0d04dc8dbadff980ccddae44b17db4fd,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205008277277145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6033ad13b35a96ae8bf893b867c548e7f522537c168b72f2bad54d0bb6d93fb2,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205008281942729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8839006729233a5d1b8ebe2f69d9f26db459df3c0a677c9af2b3fd2fb453d89,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716204998160122971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650dc968a6942f1077c646df6088c6fda678e029d2f4d359a6f9f4dbd0abd96f,PodSandboxId:06b870e0b6f2fa51b4345068b96a7b2ea7f40762e8aaea660475b70befdc5b64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716204996144955919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41199e5e56c1a417ab05ce7dbd96e44f23d229c54f0de23270a3866b282a60cb,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716204992517715353,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edabd0ea698bd9a316df81e484f6d171b1288028aad6c0bef7f506bbfa2922b3,PodSandboxId:bc2fdef3e001d5e1f856a1ffe187efb009d8f03ec57ec7228d41dc43341fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992417301098,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b060a884248a17b961f62a3c026481577301c7a1adacae456bd92d55577b1c,PodSandboxId:10f6c8b9c72501b2adb72bceaf0670708456fd48b9395be26da369dcd2795778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992203498903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee
8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca5c59540da9db0c4efa557d0c19ba9f3d099c44ed660404b3e336b8bd7ac87,PodSandboxId:1eda3823bf29b852b58eccae9d6f5be02349f4c9d1f3f28fcbf2e146c50a7abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17162049904
50906709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716204984628901075,Labels:map[string
]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716204984580250106,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5,PodSandboxId:0c44cde2cf4ffbda64ecf1fcddbd92cb753a89776e403235f2c82c8037f9505d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952539449067,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe,PodSandboxId:ab9cf285a20803e60ab12fdaf453f6665770149765479c6e7d6df1c48667a09e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952510882345,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85,PodSandboxId:f30afae4a47c5c2f0ef49f67c18c259e3c0bec80ab819cde3626
1bfcdbdc5efa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716204951745980379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a,PodSandboxId:54ca63588c2c8d912083dd87dea295af6e56667f2c9547f83406ebb5baaa42ad,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716204932336146974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c6fba36-c36a-4377-9ffe-8eb5a5fcb498 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.034219878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5854270b-4214-4407-b0c8-105c4c556716 name=/runtime.v1.RuntimeService/Version
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.034289287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5854270b-4214-4407-b0c8-105c4c556716 name=/runtime.v1.RuntimeService/Version
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.035265641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fe414d1-ab05-45a4-860c-675c028abbb2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.035758806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205015035730746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fe414d1-ab05-45a4-860c-675c028abbb2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.036614670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54190dc9-d3ae-4616-a4a8-2c65df3ec371 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.036698287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54190dc9-d3ae-4616-a4a8-2c65df3ec371 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.037047708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc6985f03c871ff25faa3663996aa309231cc84e5541f6468e3aa89cce628a02,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205012099464483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2db3feb00e0ca9266e946128a674254a14a5c99d9ae43923fe49b8f8a0c740,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205008292607334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5437cf9e02c46b6e4e6ada3614c0c0d04dc8dbadff980ccddae44b17db4fd,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205008277277145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6033ad13b35a96ae8bf893b867c548e7f522537c168b72f2bad54d0bb6d93fb2,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205008281942729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8839006729233a5d1b8ebe2f69d9f26db459df3c0a677c9af2b3fd2fb453d89,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716204998160122971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650dc968a6942f1077c646df6088c6fda678e029d2f4d359a6f9f4dbd0abd96f,PodSandboxId:06b870e0b6f2fa51b4345068b96a7b2ea7f40762e8aaea660475b70befdc5b64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716204996144955919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41199e5e56c1a417ab05ce7dbd96e44f23d229c54f0de23270a3866b282a60cb,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716204992517715353,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edabd0ea698bd9a316df81e484f6d171b1288028aad6c0bef7f506bbfa2922b3,PodSandboxId:bc2fdef3e001d5e1f856a1ffe187efb009d8f03ec57ec7228d41dc43341fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992417301098,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b060a884248a17b961f62a3c026481577301c7a1adacae456bd92d55577b1c,PodSandboxId:10f6c8b9c72501b2adb72bceaf0670708456fd48b9395be26da369dcd2795778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992203498903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee
8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca5c59540da9db0c4efa557d0c19ba9f3d099c44ed660404b3e336b8bd7ac87,PodSandboxId:1eda3823bf29b852b58eccae9d6f5be02349f4c9d1f3f28fcbf2e146c50a7abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17162049904
50906709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716204984628901075,Labels:map[string
]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716204984580250106,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5,PodSandboxId:0c44cde2cf4ffbda64ecf1fcddbd92cb753a89776e403235f2c82c8037f9505d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952539449067,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe,PodSandboxId:ab9cf285a20803e60ab12fdaf453f6665770149765479c6e7d6df1c48667a09e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952510882345,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85,PodSandboxId:f30afae4a47c5c2f0ef49f67c18c259e3c0bec80ab819cde3626
1bfcdbdc5efa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716204951745980379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a,PodSandboxId:54ca63588c2c8d912083dd87dea295af6e56667f2c9547f83406ebb5baaa42ad,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716204932336146974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54190dc9-d3ae-4616-a4a8-2c65df3ec371 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.079369758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dded9af-9c2d-4792-86ea-1a7cc508a9c0 name=/runtime.v1.RuntimeService/Version
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.079479528Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dded9af-9c2d-4792-86ea-1a7cc508a9c0 name=/runtime.v1.RuntimeService/Version
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.080697283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07fc1d47-5c18-43ea-8c25-cdaea24679ce name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.081061924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205015081039982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07fc1d47-5c18-43ea-8c25-cdaea24679ce name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.081743431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84112f46-9327-4182-893c-e07870f150b7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.081811294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84112f46-9327-4182-893c-e07870f150b7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.084978105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc6985f03c871ff25faa3663996aa309231cc84e5541f6468e3aa89cce628a02,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205012099464483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2db3feb00e0ca9266e946128a674254a14a5c99d9ae43923fe49b8f8a0c740,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205008292607334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5437cf9e02c46b6e4e6ada3614c0c0d04dc8dbadff980ccddae44b17db4fd,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205008277277145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6033ad13b35a96ae8bf893b867c548e7f522537c168b72f2bad54d0bb6d93fb2,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205008281942729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8839006729233a5d1b8ebe2f69d9f26db459df3c0a677c9af2b3fd2fb453d89,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716204998160122971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650dc968a6942f1077c646df6088c6fda678e029d2f4d359a6f9f4dbd0abd96f,PodSandboxId:06b870e0b6f2fa51b4345068b96a7b2ea7f40762e8aaea660475b70befdc5b64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716204996144955919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41199e5e56c1a417ab05ce7dbd96e44f23d229c54f0de23270a3866b282a60cb,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716204992517715353,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edabd0ea698bd9a316df81e484f6d171b1288028aad6c0bef7f506bbfa2922b3,PodSandboxId:bc2fdef3e001d5e1f856a1ffe187efb009d8f03ec57ec7228d41dc43341fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992417301098,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b060a884248a17b961f62a3c026481577301c7a1adacae456bd92d55577b1c,PodSandboxId:10f6c8b9c72501b2adb72bceaf0670708456fd48b9395be26da369dcd2795778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992203498903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee
8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca5c59540da9db0c4efa557d0c19ba9f3d099c44ed660404b3e336b8bd7ac87,PodSandboxId:1eda3823bf29b852b58eccae9d6f5be02349f4c9d1f3f28fcbf2e146c50a7abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17162049904
50906709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716204984628901075,Labels:map[string
]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716204984580250106,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5,PodSandboxId:0c44cde2cf4ffbda64ecf1fcddbd92cb753a89776e403235f2c82c8037f9505d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952539449067,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe,PodSandboxId:ab9cf285a20803e60ab12fdaf453f6665770149765479c6e7d6df1c48667a09e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952510882345,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85,PodSandboxId:f30afae4a47c5c2f0ef49f67c18c259e3c0bec80ab819cde3626
1bfcdbdc5efa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716204951745980379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a,PodSandboxId:54ca63588c2c8d912083dd87dea295af6e56667f2c9547f83406ebb5baaa42ad,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716204932336146974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84112f46-9327-4182-893c-e07870f150b7 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.121865828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b31e30b9-03c1-4e5c-b441-e8518c343a62 name=/runtime.v1.RuntimeService/Version
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.121964121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b31e30b9-03c1-4e5c-b441-e8518c343a62 name=/runtime.v1.RuntimeService/Version
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.123145106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa25bd36-709f-4fb0-aed8-cdcaa172d117 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.123603879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205015123581823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa25bd36-709f-4fb0-aed8-cdcaa172d117 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.124301885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b2cd4a0-5928-49ab-b3ef-9e9aa3345f6d name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.124371125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b2cd4a0-5928-49ab-b3ef-9e9aa3345f6d name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:36:55 kubernetes-upgrade-854547 crio[2390]: time="2024-05-20 11:36:55.124755097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc6985f03c871ff25faa3663996aa309231cc84e5541f6468e3aa89cce628a02,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205012099464483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e2db3feb00e0ca9266e946128a674254a14a5c99d9ae43923fe49b8f8a0c740,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205008292607334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5437cf9e02c46b6e4e6ada3614c0c0d04dc8dbadff980ccddae44b17db4fd,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205008277277145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartC
ount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6033ad13b35a96ae8bf893b867c548e7f522537c168b72f2bad54d0bb6d93fb2,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205008281942729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8839006729233a5d1b8ebe2f69d9f26db459df3c0a677c9af2b3fd2fb453d89,PodSandboxId:33e9b18db63e5fbb5b7542bb0aa3e805cd9c029995adf0a4b95767f1ea526471,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716204998160122971,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31250cf8-fdb8-4df6-bd46-5ba2298421c2,},Annotations:map[string]string{io.kubernetes.container.hash: df36b11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650dc968a6942f1077c646df6088c6fda678e029d2f4d359a6f9f4dbd0abd96f,PodSandboxId:06b870e0b6f2fa51b4345068b96a7b2ea7f40762e8aaea660475b70befdc5b64,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716204996144955919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41199e5e56c1a417ab05ce7dbd96e44f23d229c54f0de23270a3866b282a60cb,PodSandboxId:d73347b679742b50fbf1991cb3fe1b8a3b71f477a59e642795b7a8b2e1422ffb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716204992517715353,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61277b1be79cee82ca0bd824195bb6a,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edabd0ea698bd9a316df81e484f6d171b1288028aad6c0bef7f506bbfa2922b3,PodSandboxId:bc2fdef3e001d5e1f856a1ffe187efb009d8f03ec57ec7228d41dc43341fff01,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992417301098,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{
\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b060a884248a17b961f62a3c026481577301c7a1adacae456bd92d55577b1c,PodSandboxId:10f6c8b9c72501b2adb72bceaf0670708456fd48b9395be26da369dcd2795778,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716204992203498903,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee
8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bca5c59540da9db0c4efa557d0c19ba9f3d099c44ed660404b3e336b8bd7ac87,PodSandboxId:1eda3823bf29b852b58eccae9d6f5be02349f4c9d1f3f28fcbf2e146c50a7abf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17162049904
50906709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1,PodSandboxId:d78f59cb48463b20d00e1bf49d23c54f1be318141a2b1a284d1960c4b95ae0fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716204984628901075,Labels:map[string
]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8101483dcdd60205f8c929b06193072,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c,PodSandboxId:f84792a0ed5e641484ef735fa58b3ef2246033ea10bfb3342cd3cd8289722d5f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716204984580250106,Labels:map[string]strin
g{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc7686875a03d2e4e74acbb72c14b0d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac36840,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5,PodSandboxId:0c44cde2cf4ffbda64ecf1fcddbd92cb753a89776e403235f2c82c8037f9505d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952539449067,Labels:map[string]string{io.kubernet
es.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lcrlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6d92ee8-7bcf-41d4-8763-ffda61451835,},Annotations:map[string]string{io.kubernetes.container.hash: b922d0ff,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe,PodSandboxId:ab9cf285a20803e60ab12fdaf453f6665770149765479c6e7d6df1c48667a09e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716204952510882345,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-tbwm4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a1a1123-c554-4b5e-92c7-850e35340002,},Annotations:map[string]string{io.kubernetes.container.hash: 868d5bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85,PodSandboxId:f30afae4a47c5c2f0ef49f67c18c259e3c0bec80ab819cde3626
1bfcdbdc5efa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716204951745980379,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdk8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de31fbca-6c00-409c-a231-2b6111ba7505,},Annotations:map[string]string{io.kubernetes.container.hash: d228ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a,PodSandboxId:54ca63588c2c8d912083dd87dea295af6e56667f2c9547f83406ebb5baaa42ad,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716204932336146974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-854547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14caa6916f14541fee0083cd26e0116b,},Annotations:map[string]string{io.kubernetes.container.hash: 78b95af0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b2cd4a0-5928-49ab-b3ef-9e9aa3345f6d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dc6985f03c871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   33e9b18db63e5       storage-provisioner
	9e2db3feb00e0       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   6 seconds ago        Running             kube-controller-manager   2                   d73347b679742       kube-controller-manager-kubernetes-upgrade-854547
	6033ad13b35a9       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   6 seconds ago        Running             kube-apiserver            2                   f84792a0ed5e6       kube-apiserver-kubernetes-upgrade-854547
	91f5437cf9e02       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   6 seconds ago        Running             kube-scheduler            2                   d78f59cb48463       kube-scheduler-kubernetes-upgrade-854547
	c883900672923       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago       Exited              storage-provisioner       1                   33e9b18db63e5       storage-provisioner
	650dc968a6942       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   19 seconds ago       Running             kube-proxy                1                   06b870e0b6f2f       kube-proxy-wdk8j
	41199e5e56c1a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   22 seconds ago       Exited              kube-controller-manager   1                   d73347b679742       kube-controller-manager-kubernetes-upgrade-854547
	edabd0ea698bd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago       Running             coredns                   1                   bc2fdef3e001d       coredns-7db6d8ff4d-tbwm4
	14b060a884248       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago       Running             coredns                   1                   10f6c8b9c7250       coredns-7db6d8ff4d-lcrlw
	bca5c59540da9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago       Running             etcd                      1                   1eda3823bf29b       etcd-kubernetes-upgrade-854547
	7b42898f99958       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   30 seconds ago       Exited              kube-scheduler            1                   d78f59cb48463       kube-scheduler-kubernetes-upgrade-854547
	02159dda2a4e5       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   30 seconds ago       Exited              kube-apiserver            1                   f84792a0ed5e6       kube-apiserver-kubernetes-upgrade-854547
	9af43faba0f21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   0c44cde2cf4ff       coredns-7db6d8ff4d-lcrlw
	13bd7a2805931       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   ab9cf285a2080       coredns-7db6d8ff4d-tbwm4
	c20c4bafb9888       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   About a minute ago   Exited              kube-proxy                0                   f30afae4a47c5       kube-proxy-wdk8j
	5d497401ccbcb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   54ca63588c2c8       etcd-kubernetes-upgrade-854547
	
	
	==> coredns [13bd7a280593139ee44caab759fa6148f13830f19eeaca3ba02bf84e3d63eefe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [14b060a884248a17b961f62a3c026481577301c7a1adacae456bd92d55577b1c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9af43faba0f21599fdd29d10bb2b2c4dd52e201c1b0e9ce292c5bf909c9f62c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [edabd0ea698bd9a316df81e484f6d171b1288028aad6c0bef7f506bbfa2922b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-854547
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-854547
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:35:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-854547
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:36:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:36:51 +0000   Mon, 20 May 2024 11:35:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:36:51 +0000   Mon, 20 May 2024 11:35:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:36:51 +0000   Mon, 20 May 2024 11:35:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:36:51 +0000   Mon, 20 May 2024 11:35:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    kubernetes-upgrade-854547
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b41433c22ac4b9089b0d30a4a227716
	  System UUID:                5b41433c-22ac-4b90-89b0-d30a4a227716
	  Boot ID:                    3e8ed537-3505-4e17-9a50-c8d6de1828c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lcrlw                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 coredns-7db6d8ff4d-tbwm4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-kubernetes-upgrade-854547                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kube-apiserver-kubernetes-upgrade-854547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-854547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-wdk8j                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-kubernetes-upgrade-854547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node kubernetes-upgrade-854547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node kubernetes-upgrade-854547 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node kubernetes-upgrade-854547 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           65s                node-controller  Node kubernetes-upgrade-854547 event: Registered Node kubernetes-upgrade-854547 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-854547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-854547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-854547 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.512471] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.128483] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.155264] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.163954] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.287956] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +4.435628] systemd-fstab-generator[730]: Ignoring "noauto" option for root device
	[  +0.058056] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.097102] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[ +10.321174] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.116236] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.495691] kauditd_printk_skb: 21 callbacks suppressed
	[May20 11:36] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.095235] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.086315] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.188954] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.147880] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +0.317949] systemd-fstab-generator[2259]: Ignoring "noauto" option for root device
	[  +1.051741] systemd-fstab-generator[2514]: Ignoring "noauto" option for root device
	[  +6.185488] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.181472] kauditd_printk_skb: 46 callbacks suppressed
	[ +11.952367] systemd-fstab-generator[3489]: Ignoring "noauto" option for root device
	[  +0.090183] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.627281] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.063315] systemd-fstab-generator[3835]: Ignoring "noauto" option for root device
	
	
	==> etcd [5d497401ccbcb478f6e72447a825bd8622da7b16c69c054ed4a20415a86a7b2a] <==
	{"level":"info","ts":"2024-05-20T11:35:33.719939Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:kubernetes-upgrade-854547 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:35:33.720513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:35:33.720742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:35:33.720793Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:35:33.720576Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:35:33.720599Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:35:33.723917Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	{"level":"info","ts":"2024-05-20T11:35:33.720626Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:35:33.724124Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:35:33.726539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-20T11:35:56.962054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"534.796818ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4713737833482710660 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/kubernetes-upgrade-854547\" mod_revision:329 > success:<request_put:<key:\"/registry/minions/kubernetes-upgrade-854547\" value_size:4508 >> failure:<request_range:<key:\"/registry/minions/kubernetes-upgrade-854547\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T11:35:56.962266Z","caller":"traceutil/trace.go:171","msg":"trace[401502222] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"897.664438ms","start":"2024-05-20T11:35:56.064584Z","end":"2024-05-20T11:35:56.962249Z","steps":["trace[401502222] 'process raft request'  (duration: 897.611649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T11:35:56.96235Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T11:35:56.06454Z","time spent":"897.778662ms","remote":"127.0.0.1:45922","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":588,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-854547\" mod_revision:279 > success:<request_put:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-854547\" value_size:522 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-854547\" > >"}
	{"level":"info","ts":"2024-05-20T11:35:56.962518Z","caller":"traceutil/trace.go:171","msg":"trace[598959046] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"929.271392ms","start":"2024-05-20T11:35:56.033232Z","end":"2024-05-20T11:35:56.962503Z","steps":["trace[598959046] 'process raft request'  (duration: 393.632084ms)","trace[598959046] 'compare'  (duration: 534.320251ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T11:35:56.962599Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T11:35:56.033213Z","time spent":"929.3533ms","remote":"127.0.0.1:45816","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4559,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/kubernetes-upgrade-854547\" mod_revision:329 > success:<request_put:<key:\"/registry/minions/kubernetes-upgrade-854547\" value_size:4508 >> failure:<request_range:<key:\"/registry/minions/kubernetes-upgrade-854547\" > >"}
	{"level":"info","ts":"2024-05-20T11:36:14.910782Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-20T11:36:14.910839Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-854547","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"]}
	{"level":"warn","ts":"2024-05-20T11:36:14.910963Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:36:14.911082Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:36:14.948215Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.87:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-20T11:36:14.948265Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.87:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-20T11:36:14.948313Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aad771494ea7416a","current-leader-member-id":"aad771494ea7416a"}
	{"level":"info","ts":"2024-05-20T11:36:14.951114Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-05-20T11:36:14.951251Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-05-20T11:36:14.951262Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-854547","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"]}
	
	
	==> etcd [bca5c59540da9db0c4efa557d0c19ba9f3d099c44ed660404b3e336b8bd7ac87] <==
	{"level":"info","ts":"2024-05-20T11:36:30.594631Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T11:36:30.594659Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-20T11:36:30.594953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a switched to configuration voters=(12310432666106675562)"}
	{"level":"info","ts":"2024-05-20T11:36:30.595082Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","added-peer-id":"aad771494ea7416a","added-peer-peer-urls":["https://192.168.39.87:2380"]}
	{"level":"info","ts":"2024-05-20T11:36:30.595268Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:36:30.595336Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:36:30.598043Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:36:30.598366Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:36:30.598536Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:36:30.598657Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-05-20T11:36:30.59871Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-05-20T11:36:32.186318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T11:36:32.186362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:36:32.186468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgPreVoteResp from aad771494ea7416a at term 2"}
	{"level":"info","ts":"2024-05-20T11:36:32.186492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T11:36:32.1865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgVoteResp from aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-05-20T11:36:32.186511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became leader at term 3"}
	{"level":"info","ts":"2024-05-20T11:36:32.186521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aad771494ea7416a elected leader aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-05-20T11:36:32.193738Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:kubernetes-upgrade-854547 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:36:32.193771Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:36:32.204045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	{"level":"info","ts":"2024-05-20T11:36:32.204477Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:36:32.207097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:36:32.209502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:36:32.209561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:36:55 up 1 min,  0 users,  load average: 1.51, 0.56, 0.20
	Linux kubernetes-upgrade-854547 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c] <==
	I0520 11:36:35.566747       1 controller.go:176] quota evaluator worker shutdown
	I0520 11:36:35.566856       1 controller.go:176] quota evaluator worker shutdown
	I0520 11:36:35.566863       1 controller.go:176] quota evaluator worker shutdown
	I0520 11:36:35.566866       1 controller.go:176] quota evaluator worker shutdown
	I0520 11:36:35.566871       1 controller.go:176] quota evaluator worker shutdown
	W0520 11:36:36.217945       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:36.217999       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:37.217685       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:37.218227       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:38.217220       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:38.217807       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:39.217330       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:39.218131       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:40.217921       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:40.217946       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0520 11:36:41.217695       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:41.218203       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:42.217969       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:42.218313       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:43.217682       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:43.217845       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:44.217604       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:44.217758       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0520 11:36:45.217918       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0520 11:36:45.218068       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [6033ad13b35a96ae8bf893b867c548e7f522537c168b72f2bad54d0bb6d93fb2] <==
	I0520 11:36:51.097548       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0520 11:36:51.097577       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0520 11:36:51.217036       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0520 11:36:51.217218       1 aggregator.go:165] initial CRD sync complete...
	I0520 11:36:51.217249       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 11:36:51.217256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 11:36:51.217261       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:36:51.234666       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:36:51.234797       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 11:36:51.234869       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0520 11:36:51.234893       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0520 11:36:51.235051       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:36:51.237182       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 11:36:51.240255       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 11:36:51.240663       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0520 11:36:51.249701       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0520 11:36:51.249758       1 policy_source.go:224] refreshing policies
	I0520 11:36:51.267287       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:36:52.038912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:36:52.254145       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 11:36:53.026338       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 11:36:53.042186       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 11:36:53.087879       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 11:36:53.199218       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:36:53.208887       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [41199e5e56c1a417ab05ce7dbd96e44f23d229c54f0de23270a3866b282a60cb] <==
	I0520 11:36:33.229350       1 serving.go:380] Generated self-signed cert in-memory
	I0520 11:36:33.557912       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 11:36:33.557966       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:36:33.559788       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 11:36:33.559966       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 11:36:33.560138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 11:36:33.560183       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0520 11:36:45.345353       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.87:8443/healthz\": dial tcp 192.168.39.87:8443: connect: connection refused"
	
	
	==> kube-controller-manager [9e2db3feb00e0ca9266e946128a674254a14a5c99d9ae43923fe49b8f8a0c740] <==
	I0520 11:36:49.317946       1 serving.go:380] Generated self-signed cert in-memory
	I0520 11:36:50.273175       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0520 11:36:50.273215       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:36:50.276539       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0520 11:36:50.276751       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0520 11:36:50.276884       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0520 11:36:50.277048       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:36:53.142442       1 controllermanager.go:761] "Started controller" controller="serviceaccount-token-controller"
	I0520 11:36:53.142366       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0520 11:36:53.243657       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [650dc968a6942f1077c646df6088c6fda678e029d2f4d359a6f9f4dbd0abd96f] <==
	I0520 11:36:36.266871       1 server_linux.go:69] "Using iptables proxy"
	E0520 11:36:36.269471       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-854547\": dial tcp 192.168.39.87:8443: connect: connection refused"
	E0520 11:36:37.398214       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-854547\": dial tcp 192.168.39.87:8443: connect: connection refused"
	E0520 11:36:39.686779       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-854547\": dial tcp 192.168.39.87:8443: connect: connection refused"
	E0520 11:36:44.318110       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-854547\": dial tcp 192.168.39.87:8443: connect: connection refused"
	I0520 11:36:52.997275       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	I0520 11:36:53.060740       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:36:53.060808       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:36:53.060825       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:36:53.066889       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:36:53.067898       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:36:53.068053       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:36:53.070666       1 config.go:192] "Starting service config controller"
	I0520 11:36:53.070702       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:36:53.070779       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:36:53.070801       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:36:53.071622       1 config.go:319] "Starting node config controller"
	I0520 11:36:53.071652       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:36:53.171344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:36:53.171451       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:36:53.171905       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c20c4bafb98885b08fdbc9e9f2fd0ab46ae2831a0f95760c0e24181408f6bc85] <==
	I0520 11:35:52.221961       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:35:52.261331       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	I0520 11:35:52.386263       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:35:52.386348       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:35:52.386377       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:35:52.389657       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:35:52.389948       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:35:52.390001       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:35:52.393591       1 config.go:192] "Starting service config controller"
	I0520 11:35:52.393659       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:35:52.393739       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:35:52.393746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:35:52.407908       1 config.go:319] "Starting node config controller"
	I0520 11:35:52.420261       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:35:52.494267       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:35:52.494303       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:35:52.520575       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1] <==
	I0520 11:36:25.698793       1 serving.go:380] Generated self-signed cert in-memory
	W0520 11:36:35.271480       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:36:35.271573       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:36:35.271584       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:36:35.271590       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:36:35.358258       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:36:35.358298       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:36:35.361902       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0520 11:36:35.361971       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0520 11:36:35.362034       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [91f5437cf9e02c46b6e4e6ada3614c0c0d04dc8dbadff980ccddae44b17db4fd] <==
	I0520 11:36:49.733499       1 serving.go:380] Generated self-signed cert in-memory
	W0520 11:36:51.107782       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:36:51.107821       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:36:51.107832       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:36:51.107839       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:36:51.150599       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:36:51.150636       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:36:51.152078       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 11:36:51.152176       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:36:51.152206       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:36:51.152221       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0520 11:36:51.160791       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:36:51.160836       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 11:36:52.052622       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 11:36:47 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:47.986320    3496 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc7686875a03d2e4e74acbb72c14b0d1-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-854547\" (UID: \"bc7686875a03d2e4e74acbb72c14b0d1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-854547"
	May 20 11:36:47 kubernetes-upgrade-854547 kubelet[3496]: E0520 11:36:47.986617    3496 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-854547?timeout=10s\": dial tcp 192.168.39.87:8443: connect: connection refused" interval="400ms"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:48.083244    3496 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-854547"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: E0520 11:36:48.084252    3496 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.87:8443: connect: connection refused" node="kubernetes-upgrade-854547"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:48.252672    3496 scope.go:117] "RemoveContainer" containerID="02159dda2a4e58d7ea9a968ef3888cf65c95a6fa85096fb9b723bbfe0d63864c"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:48.253124    3496 scope.go:117] "RemoveContainer" containerID="41199e5e56c1a417ab05ce7dbd96e44f23d229c54f0de23270a3866b282a60cb"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:48.254845    3496 scope.go:117] "RemoveContainer" containerID="7b42898f99958accb840c716991850ad66e348a51b0d00f34aae4edf29e006d1"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: E0520 11:36:48.387947    3496 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-854547?timeout=10s\": dial tcp 192.168.39.87:8443: connect: connection refused" interval="800ms"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:48.486003    3496 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-854547"
	May 20 11:36:48 kubernetes-upgrade-854547 kubelet[3496]: E0520 11:36:48.486894    3496 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.87:8443: connect: connection refused" node="kubernetes-upgrade-854547"
	May 20 11:36:49 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:49.289735    3496 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-854547"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.303685    3496 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-854547"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.304096    3496 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-854547"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.305373    3496 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.306515    3496 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.780622    3496 apiserver.go:52] "Watching apiserver"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.785041    3496 topology_manager.go:215] "Topology Admit Handler" podUID="31250cf8-fdb8-4df6-bd46-5ba2298421c2" podNamespace="kube-system" podName="storage-provisioner"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.785515    3496 topology_manager.go:215] "Topology Admit Handler" podUID="de31fbca-6c00-409c-a231-2b6111ba7505" podNamespace="kube-system" podName="kube-proxy-wdk8j"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.785854    3496 topology_manager.go:215] "Topology Admit Handler" podUID="e6d92ee8-7bcf-41d4-8763-ffda61451835" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lcrlw"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.786225    3496 topology_manager.go:215] "Topology Admit Handler" podUID="6a1a1123-c554-4b5e-92c7-850e35340002" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tbwm4"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.882005    3496 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.927833    3496 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de31fbca-6c00-409c-a231-2b6111ba7505-lib-modules\") pod \"kube-proxy-wdk8j\" (UID: \"de31fbca-6c00-409c-a231-2b6111ba7505\") " pod="kube-system/kube-proxy-wdk8j"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.927927    3496 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/31250cf8-fdb8-4df6-bd46-5ba2298421c2-tmp\") pod \"storage-provisioner\" (UID: \"31250cf8-fdb8-4df6-bd46-5ba2298421c2\") " pod="kube-system/storage-provisioner"
	May 20 11:36:51 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:51.927961    3496 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de31fbca-6c00-409c-a231-2b6111ba7505-xtables-lock\") pod \"kube-proxy-wdk8j\" (UID: \"de31fbca-6c00-409c-a231-2b6111ba7505\") " pod="kube-system/kube-proxy-wdk8j"
	May 20 11:36:52 kubernetes-upgrade-854547 kubelet[3496]: I0520 11:36:52.089990    3496 scope.go:117] "RemoveContainer" containerID="c8839006729233a5d1b8ebe2f69d9f26db459df3c0a677c9af2b3fd2fb453d89"
	
	
	==> storage-provisioner [c8839006729233a5d1b8ebe2f69d9f26db459df3c0a677c9af2b3fd2fb453d89] <==
	I0520 11:36:38.261116       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 11:36:38.263669       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [dc6985f03c871ff25faa3663996aa309231cc84e5541f6468e3aa89cce628a02] <==
	I0520 11:36:52.233375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:36:52.245161       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:36:52.245463       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:36:52.266654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:36:52.266755       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"00908976-1ef6-41ed-915b-dd9412e98a57", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-854547_1b20aab8-9c71-4018-8c12-ec81fbc866f2 became leader
	I0520 11:36:52.267286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-854547_1b20aab8-9c71-4018-8c12-ec81fbc866f2!
	I0520 11:36:52.368528       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-854547_1b20aab8-9c71-4018-8c12-ec81fbc866f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-854547 -n kubernetes-upgrade-854547
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-854547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-854547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-854547
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-854547: (1.091736525s)
--- FAIL: TestKubernetesUpgrade (417.53s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (378.62s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-120159 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-120159 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m14.332655678s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-120159] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-120159" primary control-plane node in "pause-120159" cluster
	* Updating the running kvm2 "pause-120159" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-120159" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:32:40.874874   49001 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:32:40.875130   49001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:32:40.875140   49001 out.go:304] Setting ErrFile to fd 2...
	I0520 11:32:40.875144   49001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:32:40.875342   49001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:32:40.875872   49001 out.go:298] Setting JSON to false
	I0520 11:32:40.876873   49001 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4504,"bootTime":1716200257,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:32:40.876949   49001 start.go:139] virtualization: kvm guest
	I0520 11:32:40.957461   49001 out.go:177] * [pause-120159] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:32:41.222586   49001 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:32:41.128736   49001 notify.go:220] Checking for updates...
	I0520 11:32:41.339489   49001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:32:41.525696   49001 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:32:41.650893   49001 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:32:41.652396   49001 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:32:41.653730   49001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:32:41.655255   49001 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:32:41.655720   49001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:32:41.655795   49001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:32:41.671548   49001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0520 11:32:41.671970   49001 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:32:41.672533   49001 main.go:141] libmachine: Using API Version  1
	I0520 11:32:41.672557   49001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:32:41.672840   49001 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:32:41.673033   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:41.673322   49001 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:32:41.673631   49001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:32:41.673664   49001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:32:41.688160   49001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I0520 11:32:41.688566   49001 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:32:41.689037   49001 main.go:141] libmachine: Using API Version  1
	I0520 11:32:41.689066   49001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:32:41.689427   49001 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:32:41.689623   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:41.843726   49001 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:32:41.845066   49001 start.go:297] selected driver: kvm2
	I0520 11:32:41.845088   49001 start.go:901] validating driver "kvm2" against &{Name:pause-120159 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-120159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:32:41.845202   49001 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:32:41.845551   49001 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:32:41.845615   49001 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:32:41.864884   49001 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:32:41.865774   49001 cni.go:84] Creating CNI manager for ""
	I0520 11:32:41.865789   49001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:32:41.865838   49001 start.go:340] cluster config:
	{Name:pause-120159 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-120159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:32:41.865966   49001 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:32:41.867974   49001 out.go:177] * Starting "pause-120159" primary control-plane node in "pause-120159" cluster
	I0520 11:32:41.869175   49001 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:32:41.869212   49001 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:32:41.869222   49001 cache.go:56] Caching tarball of preloaded images
	I0520 11:32:41.869302   49001 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:32:41.869315   49001 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:32:41.869450   49001 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/config.json ...
	I0520 11:32:41.869740   49001 start.go:360] acquireMachinesLock for pause-120159: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:32:41.869802   49001 start.go:364] duration metric: took 39.609µs to acquireMachinesLock for "pause-120159"
	I0520 11:32:41.869821   49001 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:32:41.869831   49001 fix.go:54] fixHost starting: 
	I0520 11:32:41.870100   49001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:32:41.870135   49001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:32:41.885135   49001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0520 11:32:41.885542   49001 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:32:41.885992   49001 main.go:141] libmachine: Using API Version  1
	I0520 11:32:41.886016   49001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:32:41.886322   49001 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:32:41.886468   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:41.886646   49001 main.go:141] libmachine: (pause-120159) Calling .GetState
	I0520 11:32:41.888304   49001 fix.go:112] recreateIfNeeded on pause-120159: state=Running err=<nil>
	W0520 11:32:41.888337   49001 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:32:41.889657   49001 out.go:177] * Updating the running kvm2 "pause-120159" VM ...
	I0520 11:32:41.890800   49001 machine.go:94] provisionDockerMachine start ...
	I0520 11:32:41.890826   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:41.891061   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:41.893135   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:41.893461   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:41.893478   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:41.893620   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:41.893790   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:41.893972   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:41.894190   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:41.894377   49001 main.go:141] libmachine: Using SSH client type: native
	I0520 11:32:41.894556   49001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0520 11:32:41.894567   49001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:32:42.010688   49001 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120159
	
	I0520 11:32:42.010738   49001 main.go:141] libmachine: (pause-120159) Calling .GetMachineName
	I0520 11:32:42.011018   49001 buildroot.go:166] provisioning hostname "pause-120159"
	I0520 11:32:42.011046   49001 main.go:141] libmachine: (pause-120159) Calling .GetMachineName
	I0520 11:32:42.011248   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:42.014256   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.014674   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:42.014711   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.014900   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:42.015117   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.015275   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.015444   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:42.015642   49001 main.go:141] libmachine: Using SSH client type: native
	I0520 11:32:42.015813   49001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0520 11:32:42.015829   49001 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-120159 && echo "pause-120159" | sudo tee /etc/hostname
	I0520 11:32:42.149993   49001 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-120159
	
	I0520 11:32:42.150023   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:42.153310   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.153755   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:42.153787   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.154058   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:42.154246   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.154437   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.154577   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:42.154718   49001 main.go:141] libmachine: Using SSH client type: native
	I0520 11:32:42.154907   49001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0520 11:32:42.154972   49001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-120159' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-120159/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-120159' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:32:42.262243   49001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:32:42.262275   49001 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:32:42.262311   49001 buildroot.go:174] setting up certificates
	I0520 11:32:42.262332   49001 provision.go:84] configureAuth start
	I0520 11:32:42.262344   49001 main.go:141] libmachine: (pause-120159) Calling .GetMachineName
	I0520 11:32:42.262626   49001 main.go:141] libmachine: (pause-120159) Calling .GetIP
	I0520 11:32:42.265645   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.266067   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:42.266106   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.266200   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:42.268661   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.269131   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:42.269158   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.269320   49001 provision.go:143] copyHostCerts
	I0520 11:32:42.269389   49001 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:32:42.269410   49001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:32:42.269471   49001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:32:42.269575   49001 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:32:42.269585   49001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:32:42.269614   49001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:32:42.269681   49001 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:32:42.269692   49001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:32:42.269718   49001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:32:42.269791   49001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.pause-120159 san=[127.0.0.1 192.168.50.7 localhost minikube pause-120159]
	I0520 11:32:42.806628   49001 provision.go:177] copyRemoteCerts
	I0520 11:32:42.806690   49001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:32:42.806712   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:42.809879   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.810230   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:42.810261   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.810450   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:42.811235   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.811427   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:42.811607   49001 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/pause-120159/id_rsa Username:docker}
	I0520 11:32:42.896082   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:32:42.926021   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 11:32:42.955317   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:32:42.981181   49001 provision.go:87] duration metric: took 718.835385ms to configureAuth
	I0520 11:32:42.981203   49001 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:32:42.981393   49001 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:32:42.981456   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:42.984130   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.984504   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:42.984531   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:42.984710   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:42.984912   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.985105   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:42.985243   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:42.985412   49001 main.go:141] libmachine: Using SSH client type: native
	I0520 11:32:42.985637   49001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0520 11:32:42.985660   49001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:32:48.608206   49001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:32:48.608240   49001 machine.go:97] duration metric: took 6.717420022s to provisionDockerMachine
	I0520 11:32:48.608253   49001 start.go:293] postStartSetup for "pause-120159" (driver="kvm2")
	I0520 11:32:48.608267   49001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:32:48.608288   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:48.608713   49001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:32:48.608745   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:48.611681   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.612088   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:48.612121   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.612375   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:48.612547   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:48.612699   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:48.612820   49001 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/pause-120159/id_rsa Username:docker}
	I0520 11:32:48.695670   49001 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:32:48.700413   49001 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:32:48.700439   49001 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:32:48.700506   49001 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:32:48.700611   49001 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:32:48.700709   49001 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:32:48.710063   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:32:48.734309   49001 start.go:296] duration metric: took 126.04172ms for postStartSetup
	I0520 11:32:48.734352   49001 fix.go:56] duration metric: took 6.864521335s for fixHost
	I0520 11:32:48.734377   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:48.737258   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.737669   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:48.737697   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.737784   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:48.737984   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:48.738149   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:48.738352   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:48.738544   49001 main.go:141] libmachine: Using SSH client type: native
	I0520 11:32:48.738762   49001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0520 11:32:48.738776   49001 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:32:48.849906   49001 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716204768.829567962
	
	I0520 11:32:48.849932   49001 fix.go:216] guest clock: 1716204768.829567962
	I0520 11:32:48.849940   49001 fix.go:229] Guest: 2024-05-20 11:32:48.829567962 +0000 UTC Remote: 2024-05-20 11:32:48.734358476 +0000 UTC m=+7.892845746 (delta=95.209486ms)
	I0520 11:32:48.849986   49001 fix.go:200] guest clock delta is within tolerance: 95.209486ms
	I0520 11:32:48.849993   49001 start.go:83] releasing machines lock for "pause-120159", held for 6.980178692s
	I0520 11:32:48.850021   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:48.850291   49001 main.go:141] libmachine: (pause-120159) Calling .GetIP
	I0520 11:32:48.853301   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.853732   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:48.853762   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.853914   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:48.854540   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:48.854712   49001 main.go:141] libmachine: (pause-120159) Calling .DriverName
	I0520 11:32:48.854810   49001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:32:48.854859   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:48.854928   49001 ssh_runner.go:195] Run: cat /version.json
	I0520 11:32:48.854967   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHHostname
	I0520 11:32:48.857759   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.858092   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.858208   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:48.858242   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.858381   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:48.858521   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:32:48.858541   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:32:48.858579   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:48.858723   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHPort
	I0520 11:32:48.858737   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:48.858870   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHKeyPath
	I0520 11:32:48.858920   49001 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/pause-120159/id_rsa Username:docker}
	I0520 11:32:48.859051   49001 main.go:141] libmachine: (pause-120159) Calling .GetSSHUsername
	I0520 11:32:48.859197   49001 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/pause-120159/id_rsa Username:docker}
	W0520 11:32:48.957114   49001 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:32:48.957215   49001 ssh_runner.go:195] Run: systemctl --version
	I0520 11:32:48.964772   49001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:32:49.128643   49001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:32:49.159807   49001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:32:49.159888   49001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:32:49.181203   49001 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0520 11:32:49.181231   49001 start.go:494] detecting cgroup driver to use...
	I0520 11:32:49.181308   49001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:32:49.243477   49001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:32:49.312363   49001 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:32:49.312434   49001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:32:49.429297   49001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:32:49.481822   49001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:32:49.685014   49001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:32:49.944076   49001 docker.go:233] disabling docker service ...
	I0520 11:32:49.944155   49001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:32:49.983763   49001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:32:50.090414   49001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:32:50.388351   49001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:32:50.608274   49001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:32:50.647861   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:32:50.683418   49001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:32:50.683492   49001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.699115   49001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:32:50.699167   49001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.713710   49001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.727327   49001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.741977   49001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:32:50.756590   49001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.770383   49001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.782777   49001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:32:50.800105   49001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:32:50.811587   49001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:32:50.825702   49001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:32:51.029180   49001 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:34:21.546427   49001 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.517211164s)
	I0520 11:34:21.546454   49001 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:34:21.546496   49001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:34:21.553208   49001 start.go:562] Will wait 60s for crictl version
	I0520 11:34:21.553258   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:34:21.558624   49001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:34:21.612982   49001 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:34:21.613053   49001 ssh_runner.go:195] Run: crio --version
	I0520 11:34:21.643094   49001 ssh_runner.go:195] Run: crio --version
	I0520 11:34:21.680984   49001 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:34:21.682480   49001 main.go:141] libmachine: (pause-120159) Calling .GetIP
	I0520 11:34:21.685351   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:34:21.685727   49001 main.go:141] libmachine: (pause-120159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:61:53", ip: ""} in network mk-pause-120159: {Iface:virbr2 ExpiryTime:2024-05-20 12:31:55 +0000 UTC Type:0 Mac:52:54:00:45:61:53 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-120159 Clientid:01:52:54:00:45:61:53}
	I0520 11:34:21.685755   49001 main.go:141] libmachine: (pause-120159) DBG | domain pause-120159 has defined IP address 192.168.50.7 and MAC address 52:54:00:45:61:53 in network mk-pause-120159
	I0520 11:34:21.685957   49001 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:34:21.691083   49001 kubeadm.go:877] updating cluster {Name:pause-120159 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:pause-120159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:34:21.691247   49001 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:34:21.691312   49001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:34:21.749575   49001 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:34:21.749596   49001 crio.go:433] Images already preloaded, skipping extraction
	I0520 11:34:21.749658   49001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:34:21.789548   49001 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:34:21.789577   49001 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:34:21.789587   49001 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.1 crio true true} ...
	I0520 11:34:21.789712   49001 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-120159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-120159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:34:21.789799   49001 ssh_runner.go:195] Run: crio config
	I0520 11:34:21.842799   49001 cni.go:84] Creating CNI manager for ""
	I0520 11:34:21.842825   49001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:34:21.842841   49001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:34:21.842865   49001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-120159 NodeName:pause-120159 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:34:21.843024   49001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-120159"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:34:21.843093   49001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:34:21.858653   49001 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:34:21.858722   49001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:34:21.872118   49001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0520 11:34:21.891799   49001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:34:21.913984   49001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0520 11:34:21.932175   49001 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0520 11:34:21.936404   49001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:34:22.100539   49001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:34:22.116756   49001 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159 for IP: 192.168.50.7
	I0520 11:34:22.116779   49001 certs.go:194] generating shared ca certs ...
	I0520 11:34:22.116796   49001 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:34:22.116983   49001 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:34:22.117043   49001 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:34:22.117056   49001 certs.go:256] generating profile certs ...
	I0520 11:34:22.117129   49001 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.key
	I0520 11:34:22.117186   49001 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/apiserver.key.962261d5
	I0520 11:34:22.117221   49001 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/proxy-client.key
	I0520 11:34:22.117317   49001 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:34:22.117343   49001 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:34:22.117349   49001 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:34:22.117371   49001 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:34:22.117392   49001 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:34:22.117413   49001 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:34:22.117457   49001 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:34:22.118050   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:34:22.148753   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:34:22.175351   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:34:22.202466   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:34:22.229483   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 11:34:22.259129   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:34:22.284556   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:34:22.311467   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:34:22.338406   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:34:22.364526   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:34:22.398536   49001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:34:22.431931   49001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:34:22.455138   49001 ssh_runner.go:195] Run: openssl version
	I0520 11:34:22.462202   49001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:34:22.474750   49001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:34:22.480710   49001 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:34:22.480767   49001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:34:22.487097   49001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:34:22.497540   49001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:34:22.515132   49001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:34:22.520523   49001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:34:22.520592   49001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:34:22.526689   49001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:34:22.537380   49001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:34:22.550202   49001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:34:22.555165   49001 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:34:22.555229   49001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:34:22.561281   49001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:34:22.571505   49001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:34:22.577319   49001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:34:22.584871   49001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:34:22.590607   49001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:34:22.596700   49001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:34:22.602798   49001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:34:22.608410   49001 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:34:22.615337   49001 kubeadm.go:391] StartCluster: {Name:pause-120159 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:pause-120159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:34:22.615508   49001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:34:22.615580   49001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:34:22.660146   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:34:22.660172   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:34:22.660178   49001 cri.go:89] found id: "beeb4c481be444db14a75f4211988ac76f21911a8b62e15b4821c0f0d175eff9"
	I0520 11:34:22.660183   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:34:22.660187   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:34:22.660191   49001 cri.go:89] found id: "efe2d72379bf30d78f82cd78c19ee2c22fa6970eceb612640172ca1213871e85"
	I0520 11:34:22.660195   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:34:22.660200   49001 cri.go:89] found id: "4290aa8a336c1924ec2f54e106e985ef9b7d13645a00db074a1e56ce17f415c7"
	I0520 11:34:22.660204   49001 cri.go:89] found id: "e89e5b0ec0db2323291b06d5a387c785e7c3a86187dc191b7ca244df2a0e941a"
	I0520 11:34:22.660212   49001 cri.go:89] found id: "b3a423ad56de6c0781c6e16c7aaf9a560c2a582b0e50c477e4a6f49c683a5340"
	I0520 11:34:22.660217   49001 cri.go:89] found id: "5aa434abd88179e9f10790d26b292104f990d7eb654e47804bcc31e9642efcd5"
	I0520 11:34:22.660222   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:34:22.660230   49001 cri.go:89] found id: ""
	I0520 11:34:22.660272   49001 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-120159 -n pause-120159
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-120159 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-120159 logs -n 25: (1.259938141s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:33 UTC |
	| start   | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:34 UTC |
	|         | --no-kubernetes --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-735967                           | force-systemd-env-735967  | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p force-systemd-flag-199580                          | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-388098 sudo                           | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-199580 ssh cat                     | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-199580                          | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	| start   | -p cert-expiration-355991                             | cert-expiration-355991    | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-388098 sudo                           | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:35 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	| start   | -p cert-options-477107                                | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:35 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-477107 ssh                               | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-477107 -- sudo                        | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-477107                                | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p old-k8s-version-210420                             | old-k8s-version-210420    | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                  | no-preload-814599         | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599            | no-preload-814599         | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-814599                                  | no-preload-814599         | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:36:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:36:57.740218   55177 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:36:57.740452   55177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:57.740460   55177 out.go:304] Setting ErrFile to fd 2...
	I0520 11:36:57.740464   55177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:57.740638   55177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:36:57.741232   55177 out.go:298] Setting JSON to false
	I0520 11:36:57.742142   55177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4761,"bootTime":1716200257,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:36:57.742198   55177 start.go:139] virtualization: kvm guest
	I0520 11:36:57.744450   55177 out.go:177] * [no-preload-814599] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:36:57.745984   55177 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:36:57.746944   55177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:36:57.747962   55177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:36:57.745964   55177 notify.go:220] Checking for updates...
	I0520 11:36:57.749984   55177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:57.751225   55177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:36:57.752394   55177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:36:57.753903   55177 config.go:182] Loaded profile config "cert-expiration-355991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:57.753997   55177 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:36:57.754099   55177 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:57.754163   55177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:36:57.788611   55177 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:36:57.789885   55177 start.go:297] selected driver: kvm2
	I0520 11:36:57.789901   55177 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:36:57.789911   55177 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:36:57.790632   55177 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.790715   55177 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:36:57.805273   55177 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:36:57.805308   55177 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:36:57.805578   55177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:36:57.805646   55177 cni.go:84] Creating CNI manager for ""
	I0520 11:36:57.805659   55177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:36:57.805669   55177 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:36:57.805712   55177 start.go:340] cluster config:
	{Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:36:57.805826   55177 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.807505   55177 out.go:177] * Starting "no-preload-814599" primary control-plane node in "no-preload-814599" cluster
	I0520 11:36:58.463061   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.463494   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.463530   54732 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:36:58.463539   54732 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:36:58.463857   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420
	I0520 11:36:58.538368   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:36:58.538404   54732 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:36:58.538417   54732 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:36:58.540887   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.541361   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.541394   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.541531   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:36:58.541561   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:36:58.541590   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:36:58.541614   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:36:58.541632   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:36:58.669046   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:36:58.669304   54732 main.go:141] libmachine: (old-k8s-version-210420) KVM machine creation complete!
	I0520 11:36:58.669649   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:36:58.670148   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:58.670359   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:58.670540   54732 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:36:58.670559   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:36:58.671802   54732 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:36:58.671815   54732 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:36:58.671820   54732 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:36:58.671826   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.673985   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.674327   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.674357   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.674419   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.674570   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.674717   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.674863   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.675080   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.675315   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.675331   54732 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:36:58.788510   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:36:58.788531   54732 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:36:58.788538   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.791777   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.792093   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.792120   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.792298   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.792515   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.792694   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.792833   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.792980   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.793159   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.793169   54732 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:36:58.905732   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:36:58.905795   54732 main.go:141] libmachine: found compatible host: buildroot
	I0520 11:36:58.905808   54732 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:36:58.905817   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:58.906037   54732 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:36:58.906069   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:58.906235   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.908812   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.909252   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.909279   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.909457   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.909640   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.909803   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.909963   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.910137   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.910360   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.910382   54732 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:36:59.043365   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:36:59.043395   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.046494   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.047005   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.047029   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.047181   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.047365   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.047541   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.047678   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.047866   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:59.048197   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:59.048223   54732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:36:59.175659   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:36:59.175698   54732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:36:59.175720   54732 buildroot.go:174] setting up certificates
	I0520 11:36:59.175734   54732 provision.go:84] configureAuth start
	I0520 11:36:59.175750   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:59.176024   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:36:59.179114   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.179481   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.179513   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.179636   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.182365   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.182693   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.182728   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.182892   54732 provision.go:143] copyHostCerts
	I0520 11:36:59.182948   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:36:59.182963   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:36:59.183017   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:36:59.183120   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:36:59.183130   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:36:59.183162   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:36:59.183225   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:36:59.183232   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:36:59.183252   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:36:59.183325   54732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:36:59.516574   54732 provision.go:177] copyRemoteCerts
	I0520 11:36:59.516648   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:36:59.516694   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.520890   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.521316   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.521452   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.521534   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.521799   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.521994   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.522137   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:36:59.615162   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:36:59.639370   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:36:59.669948   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:36:59.697465   54732 provision.go:87] duration metric: took 521.714661ms to configureAuth
	I0520 11:36:59.697495   54732 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:36:59.697690   54732 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:36:59.697772   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.700598   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.700997   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.701028   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.701242   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.701414   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.701561   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.701663   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.701879   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:59.702086   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:59.702102   54732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:36:57.808628   55177 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:36:57.808751   55177 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:36:57.808782   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json: {Name:mkec34da3f05f1bf45cb4aef1e29a182c314b63c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:36:57.808855   55177 cache.go:107] acquiring lock: {Name:mk46c7017dd2c3cce534f4173742b327488567e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808895   55177 cache.go:107] acquiring lock: {Name:mk11b285bf8b40bdd567397b87b729e6c613cbbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808947   55177 cache.go:107] acquiring lock: {Name:mkf1a1c7b346d55bc111333ed6814f4739fb8a8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808972   55177 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:36:57.809002   55177 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:36:57.809019   55177 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:36:57.809052   55177 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:36:57.809009   55177 cache.go:107] acquiring lock: {Name:mkfd235693f296d62787eba91a01a7cbf9d71206 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809024   55177 cache.go:107] acquiring lock: {Name:mkf72f42066e10532cd7a6f248d435645a1b3916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808868   55177 cache.go:107] acquiring lock: {Name:mkee7b66a56ccefc27f3c797a42e758b7405686e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809024   55177 cache.go:107] acquiring lock: {Name:mk0994b0f95001ba6a6a39dda38f18a1ab80ab78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809191   55177 cache.go:115] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 11:36:57.809210   55177 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:36:57.809244   55177 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:36:57.809263   55177 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:36:57.809206   55177 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 341.402µs
	I0520 11:36:57.809354   55177 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 11:36:57.809120   55177 cache.go:107] acquiring lock: {Name:mk09942a59dde7576e41568bb51f124628a11712 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809598   55177 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:36:57.810488   55177 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:36:57.810643   55177 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:36:57.810770   55177 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:36:57.811202   55177 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:36:57.811227   55177 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:36:57.811565   55177 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:36:57.811603   55177 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:36:57.985168   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:36:58.046815   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:36:58.152865   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:36:58.155397   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:36:58.155440   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:36:58.157384   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0520 11:36:58.175790   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:36:58.231723   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0520 11:36:58.231748   55177 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 422.825832ms
	I0520 11:36:58.231758   55177 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0520 11:36:58.278313   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 11:36:58.278340   55177 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1" took 469.363676ms
	I0520 11:36:58.278355   55177 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 11:36:59.049503   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 11:36:59.049528   55177 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.24056843s
	I0520 11:36:59.049540   55177 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 11:36:59.464515   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 11:36:59.464572   55177 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1" took 1.655502108s
	I0520 11:36:59.464587   55177 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 11:36:59.530024   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 11:36:59.530053   55177 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1" took 1.721210921s
	I0520 11:36:59.530067   55177 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 11:36:59.541542   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 11:36:59.541565   55177 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1" took 1.73259042s
	I0520 11:36:59.541575   55177 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 11:36:59.579247   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 11:36:59.579272   55177 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0" took 1.77037797s
	I0520 11:36:59.579282   55177 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 11:36:59.579296   55177 cache.go:87] Successfully saved all images to host disk.
	I0520 11:37:00.294102   55177 start.go:364] duration metric: took 2.485109879s to acquireMachinesLock for "no-preload-814599"
	I0520 11:37:00.294157   55177 start.go:93] Provisioning new machine with config: &{Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:37:00.294274   55177 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 11:36:55.893603   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:55.893672   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:55.893688   49001 cri.go:89] found id: ""
	I0520 11:36:55.893697   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:55.893760   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.898164   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.902606   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:55.902675   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:55.955942   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:55.955970   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:55.955976   49001 cri.go:89] found id: ""
	I0520 11:36:55.955984   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:55.956040   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.961419   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.965977   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:55.966041   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:56.007439   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:56.007467   49001 cri.go:89] found id: ""
	I0520 11:36:56.007477   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:56.007535   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:56.013294   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:56.013360   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:56.051025   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:56.051054   49001 cri.go:89] found id: ""
	I0520 11:36:56.051064   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:36:56.051119   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:56.055384   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:56.055453   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:56.093343   49001 cri.go:89] found id: ""
	I0520 11:36:56.093368   49001 logs.go:276] 0 containers: []
	W0520 11:36:56.093379   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:56.093397   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:56.093412   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:56.175079   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:56.175107   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:56.175122   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:56.253081   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:56.253110   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:56.299222   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:36:56.299254   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:56.340853   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:56.340882   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:56.359408   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:56.359439   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:56.406249   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:56.406276   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:56.444515   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:56.444545   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:56.486746   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:56.486783   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:56.841246   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:56.841277   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:56.889777   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:56.889802   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:56.931813   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:56.931847   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:57.006897   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:57.006924   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:57.104536   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:57.104570   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:57.139795   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:57Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:57Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:59.640325   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:59.658995   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:59.659063   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:59.700016   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:59.700040   49001 cri.go:89] found id: ""
	I0520 11:36:59.700048   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:59.700097   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.705235   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:59.705303   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:59.750870   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:59.750902   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:59.750908   49001 cri.go:89] found id: ""
	I0520 11:36:59.750916   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:59.750970   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.755675   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.759495   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:59.759555   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:59.795202   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:59.795227   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:59.795232   49001 cri.go:89] found id: ""
	I0520 11:36:59.795239   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:59.795296   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.799284   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.803686   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:59.803737   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:59.847686   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:59.847715   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:59.847721   49001 cri.go:89] found id: ""
	I0520 11:36:59.847731   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:59.847786   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.852253   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.856208   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:59.856261   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:59.894425   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:59.894449   49001 cri.go:89] found id: ""
	I0520 11:36:59.894456   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:59.894506   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.899068   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:59.899137   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:59.937477   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:59.937504   49001 cri.go:89] found id: ""
	I0520 11:36:59.937512   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:36:59.937567   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.941932   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:59.942004   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:59.982641   49001 cri.go:89] found id: ""
	I0520 11:36:59.982672   49001 logs.go:276] 0 containers: []
	W0520 11:36:59.982682   49001 logs.go:278] No container was found matching "kindnet"
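The container sweep above (kube-apiserver, etcd, coredns, ..., kindnet) boils down to one crictl invocation per component; here is a rough Go sketch of that listing step. The function name listContainers is illustrative rather than minikube's cri.go API, and an empty result (as with "kindnet") simply yields zero IDs, not an error.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns all container IDs (running or not) whose name matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), component, ids)
	}
}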
	I0520 11:36:59.982701   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:59.982716   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:00.026137   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:00.026179   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:00.076692   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:00.076720   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:00.120618   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:00.120657   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:00.136104   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:00.136126   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:00.212381   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:00.212411   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:00.288774   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:00.288808   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:00.621731   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:00.621763   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:00.718258   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:00.718295   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:00.761352   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:00.761378   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:00.837420   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:00.837449   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:00.837464   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:00.874929   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:00.874957   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:00.030830   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:37:00.030863   54732 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:37:00.030875   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetURL
	I0520 11:37:00.032252   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using libvirt version 6000000
	I0520 11:37:00.034407   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.034740   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.034776   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.034998   54732 main.go:141] libmachine: Docker is up and running!
	I0520 11:37:00.035022   54732 main.go:141] libmachine: Reticulating splines...
	I0520 11:37:00.035028   54732 client.go:171] duration metric: took 25.015121735s to LocalClient.Create
	I0520 11:37:00.035057   54732 start.go:167] duration metric: took 25.015189379s to libmachine.API.Create "old-k8s-version-210420"
	I0520 11:37:00.035076   54732 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:37:00.035090   54732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:37:00.035116   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.035406   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:37:00.035442   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.037625   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.037965   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.037999   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.038068   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.038248   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.038449   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.038628   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:37:00.128149   54732 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:37:00.132646   54732 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:37:00.132670   54732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:37:00.132739   54732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:37:00.132838   54732 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:37:00.132998   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:37:00.142874   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:00.169803   54732 start.go:296] duration metric: took 134.712167ms for postStartSetup
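The "new ssh client" and ssh_runner lines above amount to dialing the guest with the machine's generated key and running commands over that session. A minimal sketch using golang.org/x/crypto/ssh follows; host, user and key path are copied from the log, while the structure is illustrative and not minikube's sshutil implementation (which, like this sketch, skips host-key verification for its throwaway VMs).

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.83.100:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}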
	I0520 11:37:00.169866   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:37:00.170434   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:00.173263   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.173608   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.173633   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.173934   54732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:37:00.174155   54732 start.go:128] duration metric: took 25.173945173s to createHost
	I0520 11:37:00.174185   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.176961   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.177375   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.177397   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.177708   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.177888   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.178074   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.178247   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.178432   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:00.178642   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:37:00.178659   54732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:37:00.293928   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205020.272115335
	
	I0520 11:37:00.293949   54732 fix.go:216] guest clock: 1716205020.272115335
	I0520 11:37:00.293959   54732 fix.go:229] Guest: 2024-05-20 11:37:00.272115335 +0000 UTC Remote: 2024-05-20 11:37:00.174170865 +0000 UTC m=+25.282981176 (delta=97.94447ms)
	I0520 11:37:00.294015   54732 fix.go:200] guest clock delta is within tolerance: 97.94447ms
	I0520 11:37:00.294023   54732 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 25.293893571s
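The guest-clock lines above compare the guest's `date +%s.%N` output against the host clock and accept a small skew. The short sketch below reproduces the logged 97.94447ms delta from those values; the one-second tolerance is an illustrative stand-in, not minikube's exact threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses the guest's "seconds.nanoseconds" clock reading and
// compares it against the host time, returning the absolute skew.
func withinTolerance(guestOutput string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	delta := time.Unix(sec, nsec).Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2024, 5, 20, 11, 37, 0, 174170865, time.UTC) // "Remote" time from the log
	delta, ok := withinTolerance("1716205020.272115335", host, time.Second)
	fmt.Printf("guest clock delta is %v (within tolerance: %v)\n", delta, ok)
}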
	I0520 11:37:00.294050   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.294323   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:00.297134   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.297565   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.297596   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.297885   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298445   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298625   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298696   54732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:37:00.298744   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.298828   54732 ssh_runner.go:195] Run: cat /version.json
	I0520 11:37:00.298860   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.301978   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302405   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302462   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.302490   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302601   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.302743   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.302885   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.302980   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.303000   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.303029   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:37:00.303310   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.303464   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.303598   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.303715   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:37:00.415297   54732 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:37:00.415384   54732 ssh_runner.go:195] Run: systemctl --version
	I0520 11:37:00.422441   54732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:37:00.586815   54732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:37:00.594067   54732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:37:00.594148   54732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:37:00.610884   54732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:37:00.610907   54732 start.go:494] detecting cgroup driver to use...
	I0520 11:37:00.610976   54732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:37:00.628469   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:37:00.644580   54732 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:37:00.644642   54732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:37:00.659819   54732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:37:00.673915   54732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:37:00.804460   54732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:37:00.986774   54732 docker.go:233] disabling docker service ...
	I0520 11:37:00.986844   54732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:37:01.004845   54732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:37:01.019356   54732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:37:01.148680   54732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:37:01.269540   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:37:01.285312   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:37:01.304010   54732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:37:01.304084   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.316378   54732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:37:01.316451   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.327728   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.338698   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.349526   54732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:37:01.360959   54732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:37:01.373008   54732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:37:01.373060   54732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:37:01.386782   54732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
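The netfilter sequence above is a standard CRI-O preparation step: the bridge sysctl key only exists once br_netfilter is loaded, so the failed sysctl (status 255) is followed by modprobe and by switching on IPv4 forwarding. A compact sketch of that fallback, with run-as-root handling simplified for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func run(cmdline string) error {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w\n%s", cmdline, err, out)
	}
	return nil
}

func main() {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// /proc/sys/net/bridge/* only appears once br_netfilter is loaded.
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo modprobe br_netfilter"); err != nil {
			panic(err)
		}
	}
	if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
		panic(err)
	}
}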
	I0520 11:37:01.397163   54732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:01.518110   54732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:37:01.665579   54732 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:37:01.665652   54732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:37:01.670666   54732 start.go:562] Will wait 60s for crictl version
	I0520 11:37:01.670709   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:01.674485   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:37:01.728836   54732 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:37:01.728982   54732 ssh_runner.go:195] Run: crio --version
	I0520 11:37:01.760458   54732 ssh_runner.go:195] Run: crio --version
	I0520 11:37:01.792553   54732 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:37:00.296371   55177 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 11:37:00.296581   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:37:00.296642   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:37:00.315548   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0520 11:37:00.316070   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:37:00.316715   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:37:00.316736   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:37:00.317283   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:37:00.317518   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:00.317838   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:00.321161   55177 start.go:159] libmachine.API.Create for "no-preload-814599" (driver="kvm2")
	I0520 11:37:00.321194   55177 client.go:168] LocalClient.Create starting
	I0520 11:37:00.321226   55177 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 11:37:00.321268   55177 main.go:141] libmachine: Decoding PEM data...
	I0520 11:37:00.321288   55177 main.go:141] libmachine: Parsing certificate...
	I0520 11:37:00.321359   55177 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 11:37:00.321387   55177 main.go:141] libmachine: Decoding PEM data...
	I0520 11:37:00.321400   55177 main.go:141] libmachine: Parsing certificate...
	I0520 11:37:00.321435   55177 main.go:141] libmachine: Running pre-create checks...
	I0520 11:37:00.321448   55177 main.go:141] libmachine: (no-preload-814599) Calling .PreCreateCheck
	I0520 11:37:00.321854   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:37:00.322302   55177 main.go:141] libmachine: Creating machine...
	I0520 11:37:00.322319   55177 main.go:141] libmachine: (no-preload-814599) Calling .Create
	I0520 11:37:00.322474   55177 main.go:141] libmachine: (no-preload-814599) Creating KVM machine...
	I0520 11:37:00.323915   55177 main.go:141] libmachine: (no-preload-814599) DBG | found existing default KVM network
	I0520 11:37:00.325990   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.325828   55243 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0520 11:37:00.326026   55177 main.go:141] libmachine: (no-preload-814599) DBG | created network xml: 
	I0520 11:37:00.326044   55177 main.go:141] libmachine: (no-preload-814599) DBG | <network>
	I0520 11:37:00.326052   55177 main.go:141] libmachine: (no-preload-814599) DBG |   <name>mk-no-preload-814599</name>
	I0520 11:37:00.326068   55177 main.go:141] libmachine: (no-preload-814599) DBG |   <dns enable='no'/>
	I0520 11:37:00.326079   55177 main.go:141] libmachine: (no-preload-814599) DBG |   
	I0520 11:37:00.326090   55177 main.go:141] libmachine: (no-preload-814599) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 11:37:00.326100   55177 main.go:141] libmachine: (no-preload-814599) DBG |     <dhcp>
	I0520 11:37:00.326114   55177 main.go:141] libmachine: (no-preload-814599) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 11:37:00.326126   55177 main.go:141] libmachine: (no-preload-814599) DBG |     </dhcp>
	I0520 11:37:00.326139   55177 main.go:141] libmachine: (no-preload-814599) DBG |   </ip>
	I0520 11:37:00.326149   55177 main.go:141] libmachine: (no-preload-814599) DBG |   
	I0520 11:37:00.326176   55177 main.go:141] libmachine: (no-preload-814599) DBG | </network>
	I0520 11:37:00.326187   55177 main.go:141] libmachine: (no-preload-814599) DBG | 
	I0520 11:37:00.332189   55177 main.go:141] libmachine: (no-preload-814599) DBG | trying to create private KVM network mk-no-preload-814599 192.168.39.0/24...
	I0520 11:37:00.411151   55177 main.go:141] libmachine: (no-preload-814599) DBG | private KVM network mk-no-preload-814599 192.168.39.0/24 created
	I0520 11:37:00.411190   55177 main.go:141] libmachine: (no-preload-814599) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599 ...
	I0520 11:37:00.411206   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.411077   55243 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:37:00.411235   55177 main.go:141] libmachine: (no-preload-814599) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:37:00.411256   55177 main.go:141] libmachine: (no-preload-814599) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 11:37:00.644043   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.643935   55243 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa...
	I0520 11:37:00.877196   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.877035   55243 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/no-preload-814599.rawdisk...
	I0520 11:37:00.877234   55177 main.go:141] libmachine: (no-preload-814599) DBG | Writing magic tar header
	I0520 11:37:00.877251   55177 main.go:141] libmachine: (no-preload-814599) DBG | Writing SSH key tar header
	I0520 11:37:00.877264   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.877182   55243 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599 ...
	I0520 11:37:00.877325   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599
	I0520 11:37:00.877378   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599 (perms=drwx------)
	I0520 11:37:00.877394   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 11:37:00.877407   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 11:37:00.877435   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 11:37:00.877453   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 11:37:00.877467   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:37:00.877494   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 11:37:00.877508   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 11:37:00.877523   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 11:37:00.877536   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins
	I0520 11:37:00.877548   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home
	I0520 11:37:00.877559   55177 main.go:141] libmachine: (no-preload-814599) DBG | Skipping /home - not owner
	I0520 11:37:00.877602   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 11:37:00.877631   55177 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:37:00.878796   55177 main.go:141] libmachine: (no-preload-814599) define libvirt domain using xml: 
	I0520 11:37:00.878815   55177 main.go:141] libmachine: (no-preload-814599) <domain type='kvm'>
	I0520 11:37:00.878845   55177 main.go:141] libmachine: (no-preload-814599)   <name>no-preload-814599</name>
	I0520 11:37:00.878869   55177 main.go:141] libmachine: (no-preload-814599)   <memory unit='MiB'>2200</memory>
	I0520 11:37:00.878879   55177 main.go:141] libmachine: (no-preload-814599)   <vcpu>2</vcpu>
	I0520 11:37:00.878890   55177 main.go:141] libmachine: (no-preload-814599)   <features>
	I0520 11:37:00.878899   55177 main.go:141] libmachine: (no-preload-814599)     <acpi/>
	I0520 11:37:00.878909   55177 main.go:141] libmachine: (no-preload-814599)     <apic/>
	I0520 11:37:00.878921   55177 main.go:141] libmachine: (no-preload-814599)     <pae/>
	I0520 11:37:00.878934   55177 main.go:141] libmachine: (no-preload-814599)     
	I0520 11:37:00.878947   55177 main.go:141] libmachine: (no-preload-814599)   </features>
	I0520 11:37:00.878963   55177 main.go:141] libmachine: (no-preload-814599)   <cpu mode='host-passthrough'>
	I0520 11:37:00.878983   55177 main.go:141] libmachine: (no-preload-814599)   
	I0520 11:37:00.878997   55177 main.go:141] libmachine: (no-preload-814599)   </cpu>
	I0520 11:37:00.879005   55177 main.go:141] libmachine: (no-preload-814599)   <os>
	I0520 11:37:00.879012   55177 main.go:141] libmachine: (no-preload-814599)     <type>hvm</type>
	I0520 11:37:00.879021   55177 main.go:141] libmachine: (no-preload-814599)     <boot dev='cdrom'/>
	I0520 11:37:00.879029   55177 main.go:141] libmachine: (no-preload-814599)     <boot dev='hd'/>
	I0520 11:37:00.879035   55177 main.go:141] libmachine: (no-preload-814599)     <bootmenu enable='no'/>
	I0520 11:37:00.879039   55177 main.go:141] libmachine: (no-preload-814599)   </os>
	I0520 11:37:00.879045   55177 main.go:141] libmachine: (no-preload-814599)   <devices>
	I0520 11:37:00.879065   55177 main.go:141] libmachine: (no-preload-814599)     <disk type='file' device='cdrom'>
	I0520 11:37:00.879082   55177 main.go:141] libmachine: (no-preload-814599)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/boot2docker.iso'/>
	I0520 11:37:00.879095   55177 main.go:141] libmachine: (no-preload-814599)       <target dev='hdc' bus='scsi'/>
	I0520 11:37:00.879103   55177 main.go:141] libmachine: (no-preload-814599)       <readonly/>
	I0520 11:37:00.879111   55177 main.go:141] libmachine: (no-preload-814599)     </disk>
	I0520 11:37:00.879124   55177 main.go:141] libmachine: (no-preload-814599)     <disk type='file' device='disk'>
	I0520 11:37:00.879137   55177 main.go:141] libmachine: (no-preload-814599)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 11:37:00.879152   55177 main.go:141] libmachine: (no-preload-814599)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/no-preload-814599.rawdisk'/>
	I0520 11:37:00.879165   55177 main.go:141] libmachine: (no-preload-814599)       <target dev='hda' bus='virtio'/>
	I0520 11:37:00.879195   55177 main.go:141] libmachine: (no-preload-814599)     </disk>
	I0520 11:37:00.879219   55177 main.go:141] libmachine: (no-preload-814599)     <interface type='network'>
	I0520 11:37:00.879261   55177 main.go:141] libmachine: (no-preload-814599)       <source network='mk-no-preload-814599'/>
	I0520 11:37:00.879277   55177 main.go:141] libmachine: (no-preload-814599)       <model type='virtio'/>
	I0520 11:37:00.879287   55177 main.go:141] libmachine: (no-preload-814599)     </interface>
	I0520 11:37:00.879298   55177 main.go:141] libmachine: (no-preload-814599)     <interface type='network'>
	I0520 11:37:00.879311   55177 main.go:141] libmachine: (no-preload-814599)       <source network='default'/>
	I0520 11:37:00.879322   55177 main.go:141] libmachine: (no-preload-814599)       <model type='virtio'/>
	I0520 11:37:00.879335   55177 main.go:141] libmachine: (no-preload-814599)     </interface>
	I0520 11:37:00.879344   55177 main.go:141] libmachine: (no-preload-814599)     <serial type='pty'>
	I0520 11:37:00.879352   55177 main.go:141] libmachine: (no-preload-814599)       <target port='0'/>
	I0520 11:37:00.879362   55177 main.go:141] libmachine: (no-preload-814599)     </serial>
	I0520 11:37:00.879371   55177 main.go:141] libmachine: (no-preload-814599)     <console type='pty'>
	I0520 11:37:00.879381   55177 main.go:141] libmachine: (no-preload-814599)       <target type='serial' port='0'/>
	I0520 11:37:00.879390   55177 main.go:141] libmachine: (no-preload-814599)     </console>
	I0520 11:37:00.879400   55177 main.go:141] libmachine: (no-preload-814599)     <rng model='virtio'>
	I0520 11:37:00.879409   55177 main.go:141] libmachine: (no-preload-814599)       <backend model='random'>/dev/random</backend>
	I0520 11:37:00.879417   55177 main.go:141] libmachine: (no-preload-814599)     </rng>
	I0520 11:37:00.879424   55177 main.go:141] libmachine: (no-preload-814599)     
	I0520 11:37:00.879441   55177 main.go:141] libmachine: (no-preload-814599)     
	I0520 11:37:00.879453   55177 main.go:141] libmachine: (no-preload-814599)   </devices>
	I0520 11:37:00.879463   55177 main.go:141] libmachine: (no-preload-814599) </domain>
	I0520 11:37:00.879474   55177 main.go:141] libmachine: (no-preload-814599) 
	I0520 11:37:00.883084   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:5d:e0:9c in network default
	I0520 11:37:00.883705   55177 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:37:00.883725   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:00.884498   55177 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:37:00.884808   55177 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:37:00.885345   55177 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:37:00.886162   55177 main.go:141] libmachine: (no-preload-814599) Creating domain...
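Defining and booting the VM from the <domain> XML printed above is essentially two libvirt calls. A minimal sketch using the libvirt Go bindings (libvirt.org/go/libvirt) follows; reading the XML from a local file is purely for brevity and is not how the kvm2 driver assembles it.

package main

import (
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("no-preload-814599.xml") // the <domain> document shown above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." actually boots the VM
		panic(err)
	}
}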
	I0520 11:37:02.212827   55177 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:37:02.214013   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:02.214548   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:02.214612   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:02.214536   55243 retry.go:31] will retry after 230.354164ms: waiting for machine to come up
	I0520 11:37:02.447322   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:02.447961   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:02.447989   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:02.447909   55243 retry.go:31] will retry after 280.443553ms: waiting for machine to come up
	I0520 11:37:02.730530   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:02.731050   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:02.731082   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:02.730990   55243 retry.go:31] will retry after 326.461214ms: waiting for machine to come up
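The "Waiting to get IP..." retries above poll the private network's DHCP leases for the domain's MAC address, backing off between attempts (the log shows delays growing from ~230ms toward seconds). A sketch of that loop using the same libvirt bindings; the backoff factor and the overall deadline are illustrative.

package main

import (
	"fmt"
	"strings"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until one matches the domain's MAC.
func waitForIP(conn *libvirt.Connect, networkName, mac string) (string, error) {
	network, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer network.Free()

	delay := 200 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(delay)        // "will retry after ..."
		delay = delay * 3 / 2    // grow the wait between attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s", mac, networkName)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println(waitForIP(conn, "mk-no-preload-814599", "52:54:00:16:c8:fb"))
}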
	I0520 11:37:01.793799   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:01.796811   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:01.797255   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:01.797279   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:01.797553   54732 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:37:01.801923   54732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:01.814795   54732 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:37:01.814936   54732 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:37:01.815002   54732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:01.846574   54732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:37:01.846646   54732 ssh_runner.go:195] Run: which lz4
	I0520 11:37:01.850943   54732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:37:01.855547   54732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:37:01.855574   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:37:03.680746   54732 crio.go:462] duration metric: took 1.829828922s to copy over tarball
	I0520 11:37:03.680826   54732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
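The preload step above first checks the VM for /preloaded.tar.lz4 (the stat fails on a fresh machine), copies the ~473MB tarball over, and then unpacks it into /var with lz4 so CRI-O already has the v1.20.0 images. Below is a local sketch of the check-then-extract half; minikube runs these commands over SSH on the guest.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Existence check: on a freshly created machine this fails with
	// "No such file or directory", which is what triggers the scp above.
	if err := exec.Command("stat", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("preload tarball not on the machine yet, it would be copied now:", err)
		return
	}
	out, err := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("extracting preload failed: %v\n%s", err, out))
	}
	fmt.Println("preloaded images extracted into /var")
}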
	W0520 11:37:00.911127   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:00Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:00Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:00.911147   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:00.911162   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:00.951935   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:00.951964   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:03.503295   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:03.524005   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:03.524072   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:03.587506   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:03.587532   49001 cri.go:89] found id: ""
	I0520 11:37:03.587541   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:03.587596   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.593553   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:03.593623   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:03.650465   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:03.650489   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:03.650495   49001 cri.go:89] found id: ""
	I0520 11:37:03.650510   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:03.650566   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.656299   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.662229   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:03.662294   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:03.714388   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:03.714413   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:03.714418   49001 cri.go:89] found id: ""
	I0520 11:37:03.714426   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:03.714479   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.719414   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.725329   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:03.725402   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:03.767484   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:03.767509   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:03.767514   49001 cri.go:89] found id: ""
	I0520 11:37:03.767523   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:03.767580   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.772446   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.778020   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:03.778089   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:03.816734   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:03.816761   49001 cri.go:89] found id: ""
	I0520 11:37:03.816770   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:03.816830   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.821555   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:03.821619   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:03.865361   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:03.865393   49001 cri.go:89] found id: ""
	I0520 11:37:03.865402   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:03.865462   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.872164   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:03.872238   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:03.917750   49001 cri.go:89] found id: ""
	I0520 11:37:03.917778   49001 logs.go:276] 0 containers: []
	W0520 11:37:03.917794   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:03.917812   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:03.917830   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:04.016511   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:04.016561   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:04.036288   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:04.036321   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:04.077030   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:04Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:04Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:04.077051   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:04.077063   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:04.129084   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:04.129122   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:04.212662   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:04.212682   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:04.212694   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:04.257399   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:04.257428   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:04.353803   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:04.353848   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:04.426381   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:04.426412   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:04.511533   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:04.511572   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:04.554295   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:04.554322   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:04.605411   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:04.605452   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:04.653059   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:04.653084   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:05.025930   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:05.025987   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:03.059936   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:03.060553   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:03.060584   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:03.060493   55243 retry.go:31] will retry after 426.155065ms: waiting for machine to come up
	I0520 11:37:03.488253   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:03.488813   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:03.488845   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:03.488768   55243 retry.go:31] will retry after 654.064749ms: waiting for machine to come up
	I0520 11:37:04.144322   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:04.144912   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:04.144948   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:04.144823   55243 retry.go:31] will retry after 883.921739ms: waiting for machine to come up
	I0520 11:37:05.030283   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:05.030696   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:05.030732   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:05.030660   55243 retry.go:31] will retry after 1.024526338s: waiting for machine to come up
	I0520 11:37:06.056497   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:06.057125   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:06.057164   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:06.057059   55243 retry.go:31] will retry after 1.473243532s: waiting for machine to come up
	I0520 11:37:07.532663   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:07.533190   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:07.533215   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:07.533139   55243 retry.go:31] will retry after 1.168974888s: waiting for machine to come up
	I0520 11:37:06.452881   54732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77202524s)
	I0520 11:37:06.452914   54732 crio.go:469] duration metric: took 2.77213672s to extract the tarball
	I0520 11:37:06.452948   54732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:37:06.496108   54732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:06.542385   54732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:37:06.542414   54732 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:37:06.542466   54732 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:06.542500   54732 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.542517   54732 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.542528   54732 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.542579   54732 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.542620   54732 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.542508   54732 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.542588   54732 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:37:06.543881   54732 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.543932   54732 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.544060   54732 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:37:06.544066   54732 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.544057   54732 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.544093   54732 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.544088   54732 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.544234   54732 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:06.689722   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.692704   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.700348   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.715480   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.736341   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:37:06.740710   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.745468   54732 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:37:06.745506   54732 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.745544   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.755418   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.845909   54732 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:37:06.845942   54732 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:37:06.845961   54732 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.845977   54732 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.846011   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.846035   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.893251   54732 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:37:06.893310   54732 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:37:06.893290   54732 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:37:06.893347   54732 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.893361   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.893406   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.904838   54732 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:37:06.904904   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.904920   54732 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.904968   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.910151   54732 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:37:06.910188   54732 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.910215   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.910228   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.910230   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.910288   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:37:06.910363   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.912545   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:07.003917   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:37:07.021998   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:37:07.038760   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:37:07.038853   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:37:07.038887   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:37:07.039429   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:37:07.042472   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:37:07.075138   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:37:07.461848   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:07.605217   54732 cache_images.go:92] duration metric: took 1.062783538s to LoadCachedImages
	W0520 11:37:07.605318   54732 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:37:07.605336   54732 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:37:07.605472   54732 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:37:07.605583   54732 ssh_runner.go:195] Run: crio config
	I0520 11:37:07.668258   54732 cni.go:84] Creating CNI manager for ""
	I0520 11:37:07.668286   54732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:07.668310   54732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:37:07.668337   54732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:37:07.668520   54732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:37:07.668591   54732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:37:07.679647   54732 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:37:07.679713   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:37:07.689918   54732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:37:07.711056   54732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:37:07.730271   54732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:37:07.751969   54732 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:37:07.756727   54732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:07.771312   54732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:07.893449   54732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:37:07.911536   54732 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:37:07.911562   54732 certs.go:194] generating shared ca certs ...
	I0520 11:37:07.911582   54732 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:07.911754   54732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:37:07.911807   54732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:37:07.911819   54732 certs.go:256] generating profile certs ...
	I0520 11:37:07.911894   54732 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:37:07.911913   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt with IP's: []
	I0520 11:37:08.010690   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt ...
	I0520 11:37:08.010774   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: {Name:mk37e4d8e6b70b9df4bd1f33a3bb1ce774741c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.010992   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key ...
	I0520 11:37:08.011048   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key: {Name:mkef658583b929f627b0a4742ad863dab8569cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.011194   54732 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:37:08.011251   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.100]
	I0520 11:37:08.123361   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 ...
	I0520 11:37:08.123391   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9: {Name:mk126bf7eda8dbc6aeee644e3e54901634a644ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.123558   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9 ...
	I0520 11:37:08.123577   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9: {Name:mkf6c45a2b2e90a3cb664c8a2d9327ba46d6c43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.123773   54732 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt
	I0520 11:37:08.123864   54732 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key
	I0520 11:37:08.123915   54732 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:37:08.123933   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt with IP's: []
	I0520 11:37:08.287603   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt ...
	I0520 11:37:08.287636   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt: {Name:mk5a61fe0b766f2d87c141b576ed804ead7b2cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.287789   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key ...
	I0520 11:37:08.287803   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key: {Name:mk6ecec165027e23fe429021af4b2049861c5527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.287990   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:37:08.288041   54732 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:37:08.288052   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:37:08.288094   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:37:08.288129   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:37:08.288162   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:37:08.288214   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:08.288774   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:37:08.323616   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:37:08.353034   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:37:08.383245   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:37:08.412983   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:37:08.443285   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:37:08.474262   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:37:08.507794   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:37:08.542166   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:37:08.572287   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:37:08.603134   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:37:08.633088   54732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:37:08.650484   54732 ssh_runner.go:195] Run: openssl version
	I0520 11:37:08.657729   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:37:08.669476   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.674540   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.674594   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.682466   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:37:08.694751   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:37:08.709664   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.715955   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.716013   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.724137   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:37:08.736195   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:37:08.747804   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.754314   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.754379   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.762305   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:37:08.777044   54732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:37:08.782996   54732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:37:08.783051   54732 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:37:08.783153   54732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:37:08.783206   54732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:37:08.832683   54732 cri.go:89] found id: ""
	I0520 11:37:08.832748   54732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:37:08.843324   54732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:37:08.853560   54732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:37:08.863070   54732 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:37:08.863089   54732 kubeadm.go:156] found existing configuration files:
	
	I0520 11:37:08.863140   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:37:08.876732   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:37:08.876795   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:37:08.888527   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:37:08.904546   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:37:08.904618   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:37:08.920506   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:37:08.932922   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:37:08.932998   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:37:08.954646   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:37:08.966058   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:37:08.966132   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:37:08.978052   54732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:37:09.252760   54732 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:37:07.576280   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:07.591398   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:07.591463   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:07.645086   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:07.645121   49001 cri.go:89] found id: ""
	I0520 11:37:07.645131   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:07.645185   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.651098   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:07.651160   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:07.697226   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:07.697254   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:07.697259   49001 cri.go:89] found id: ""
	I0520 11:37:07.697280   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:07.697350   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.702815   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.708988   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:07.709066   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:07.753589   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:07.753612   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:07.753617   49001 cri.go:89] found id: ""
	I0520 11:37:07.753625   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:07.753673   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.758494   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.762958   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:07.763018   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:07.806489   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:07.806519   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:07.806526   49001 cri.go:89] found id: ""
	I0520 11:37:07.806536   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:07.806594   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.813114   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.818058   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:07.818128   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:07.862362   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:07.862389   49001 cri.go:89] found id: ""
	I0520 11:37:07.862398   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:07.862450   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.867347   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:07.867414   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:07.906940   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:07.906966   49001 cri.go:89] found id: ""
	I0520 11:37:07.906975   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:07.907033   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.912776   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:07.912845   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:07.961828   49001 cri.go:89] found id: ""
	I0520 11:37:07.961908   49001 logs.go:276] 0 containers: []
	W0520 11:37:07.961929   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:07.961955   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:07.962001   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:08.333365   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:08.333400   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:08.417686   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:08.417715   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:08.417733   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:08.488659   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:08.488690   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:08.536273   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:08.536294   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:08.536308   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:08.628212   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:08.628247   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:08.680938   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:08.680967   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:08.725538   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:08.725566   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:08.770191   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:08.770213   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:08.871237   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:08.871265   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:08.892521   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:08.892556   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:08.948426   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:08.948468   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:09.002843   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:09.002875   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:09.045892   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:09.045925   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:08.703262   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:08.703812   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:08.703838   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:08.703768   55243 retry.go:31] will retry after 2.0654703s: waiting for machine to come up
	I0520 11:37:10.771048   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:10.771624   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:10.771660   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:10.771564   55243 retry.go:31] will retry after 1.968841858s: waiting for machine to come up
	I0520 11:37:11.599472   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:11.624938   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:11.625025   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:11.684315   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:11.684339   49001 cri.go:89] found id: ""
	I0520 11:37:11.684348   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:11.684406   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.690414   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:11.690483   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:11.736757   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:11.736789   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:11.736794   49001 cri.go:89] found id: ""
	I0520 11:37:11.736813   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:11.736877   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.741539   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.746899   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:11.746952   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:11.788160   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:11.788183   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:11.788189   49001 cri.go:89] found id: ""
	I0520 11:37:11.788198   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:11.788246   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.792882   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.797112   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:11.797200   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:11.843558   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:11.843584   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:11.843587   49001 cri.go:89] found id: ""
	I0520 11:37:11.843594   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:11.843639   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.849537   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.855114   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:11.855179   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:11.898529   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:11.898556   49001 cri.go:89] found id: ""
	I0520 11:37:11.898566   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:11.898629   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.903372   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:11.903487   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:11.947460   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:11.947488   49001 cri.go:89] found id: ""
	I0520 11:37:11.947497   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:11.947556   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.952747   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:11.952819   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:12.001075   49001 cri.go:89] found id: ""
	I0520 11:37:12.001103   49001 logs.go:276] 0 containers: []
	W0520 11:37:12.001112   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:12.001131   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:12.001146   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:12.020264   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:12.020297   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:12.071740   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:12.071774   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:12.158604   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:12.158645   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:12.461694   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:12.461736   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:12.502521   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:12.502558   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:12.546362   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:12.546401   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:12.595362   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:12.595392   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:12.688172   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:12.688207   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:12.762044   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
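The repeated "connection to the server localhost:8443 was refused" above means the describe-nodes step cannot reach the API server on this node at that moment; the gather loop tolerates the failure and keeps collecting the other sources. A minimal, hypothetical reachability probe for the same endpoint (not part of minikube, purely illustrative) could look like this:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the endpoint kubectl is pointed at in the log above.
        // A refused connection here matches the "connection ... refused"
        // errors seen in the describe-nodes attempts.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }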
	I0520 11:37:12.762074   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:12.762089   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:12.844573   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:12.844605   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:12.888586   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:12.888616   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:12.927295   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:12Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:12Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:12.927322   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:12.927333   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:12.977584   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:12.977616   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
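Each gather cycle above follows the same two-step pattern: enumerate container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's log with "crictl logs --tail 400 <id>", plus journalctl for kubelet/CRI-O and dmesg. The sketch below reproduces that crictl pattern by hand; it is an illustration only (tailComponentLogs is a hypothetical helper, not minikube's logs.go), and it assumes crictl is on PATH and sudo is available:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // tailComponentLogs mirrors the two-step pattern in the log above:
    // list container IDs for a component, then tail each one's log.
    func tailComponentLogs(name string) error {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return err
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                // Stale IDs can fail here, as the coredns gathers above do
                // when the pod's /var/log/pods directory is already gone.
                fmt.Printf("failed %s [%s]: %v\n", name, id, err)
                continue
            }
            fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
        }
        return nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
        for _, c := range components {
            if err := tailComponentLogs(c); err != nil {
                fmt.Println("listing", c, "failed:", err)
            }
        }
    }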
	I0520 11:37:15.516297   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:15.531544   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:15.531619   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:15.578203   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:15.578229   49001 cri.go:89] found id: ""
	I0520 11:37:15.578239   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:15.578297   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.583441   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:15.583512   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:15.624751   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:15.624770   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:15.624775   49001 cri.go:89] found id: ""
	I0520 11:37:15.624781   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:15.624826   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.629294   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.633366   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:15.633449   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:15.671090   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:15.671111   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:15.671114   49001 cri.go:89] found id: ""
	I0520 11:37:15.671121   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:15.671164   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.675459   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.679636   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:15.679686   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:15.714170   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:15.714188   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:15.714194   49001 cri.go:89] found id: ""
	I0520 11:37:15.714203   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:15.714248   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.718288   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.722201   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:15.722241   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:15.759239   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:15.759260   49001 cri.go:89] found id: ""
	I0520 11:37:15.759271   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:15.759325   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.763284   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:15.763332   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:15.798264   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:15.798287   49001 cri.go:89] found id: ""
	I0520 11:37:15.798297   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:15.798351   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.802374   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:15.802435   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:15.838958   49001 cri.go:89] found id: ""
	I0520 11:37:15.838992   49001 logs.go:276] 0 containers: []
	W0520 11:37:15.839004   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:15.839022   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:15.839046   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:37:12.742167   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:12.743228   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:12.743255   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:12.742612   55243 retry.go:31] will retry after 3.334571715s: waiting for machine to come up
	I0520 11:37:16.079189   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:16.079845   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:16.079870   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:16.079798   55243 retry.go:31] will retry after 3.382891723s: waiting for machine to come up
	W0520 11:37:15.925071   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:15.925095   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:15.925110   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:16.011780   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:16.011824   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:16.046849   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:16.046873   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:16.046887   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:16.081989   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:16.082018   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:16.116973   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:16.117003   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:16.207084   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:16.207113   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:16.534590   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:16.534632   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:16.549811   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:16.549835   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:16.590252   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:16.590293   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:16.628060   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:16.628094   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:16.679118   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:16.679147   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:16.724724   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:16.724754   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:16.798204   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:16.798235   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:19.342081   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:19.355673   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:19.355730   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:19.393452   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:19.393478   49001 cri.go:89] found id: ""
	I0520 11:37:19.393485   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:19.393531   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.398052   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:19.398112   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:19.438919   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:19.438942   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:19.438948   49001 cri.go:89] found id: ""
	I0520 11:37:19.438957   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:19.439010   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.443824   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.448024   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:19.448075   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:19.492262   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:19.492285   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:19.492291   49001 cri.go:89] found id: ""
	I0520 11:37:19.492299   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:19.492343   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.496462   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.500519   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:19.500564   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:19.535861   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:19.535888   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:19.535894   49001 cri.go:89] found id: ""
	I0520 11:37:19.535901   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:19.535952   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.540188   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.544763   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:19.544822   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:19.581319   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:19.581344   49001 cri.go:89] found id: ""
	I0520 11:37:19.581355   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:19.581414   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.585589   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:19.585654   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:19.622255   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:19.622274   49001 cri.go:89] found id: ""
	I0520 11:37:19.622281   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:19.622326   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.626439   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:19.626492   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:19.662188   49001 cri.go:89] found id: ""
	I0520 11:37:19.662217   49001 logs.go:276] 0 containers: []
	W0520 11:37:19.662227   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:19.662247   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:19.662262   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:19.699894   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:19.699922   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:19.734918   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:19.734944   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:19.775359   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:19Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:19Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:19.775378   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:19.775390   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:19.810562   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:19.810587   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:20.113343   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:20.113373   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:20.128320   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:20.128343   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:20.201970   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:20.201987   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:20.201999   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:20.243148   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:20.243175   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:20.286250   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:20.286277   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:20.380349   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:20.380381   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:20.449323   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:20.449356   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:20.490813   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:20.490842   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:20.566142   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:20.566175   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:19.466211   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:19.466699   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:19.466741   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:19.466659   55243 retry.go:31] will retry after 4.541270951s: waiting for machine to come up
	I0520 11:37:23.103480   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:23.116900   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:23.116994   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:23.151961   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:23.151981   49001 cri.go:89] found id: ""
	I0520 11:37:23.151999   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:23.152055   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.156424   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:23.156480   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:23.189983   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:23.190002   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:23.190007   49001 cri.go:89] found id: ""
	I0520 11:37:23.190013   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:23.190059   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.196223   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.200761   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:23.200820   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:23.237596   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:23.237618   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:23.237622   49001 cri.go:89] found id: ""
	I0520 11:37:23.237628   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:23.237682   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.242013   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.246818   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:23.246876   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:23.289768   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:23.289793   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:23.289797   49001 cri.go:89] found id: ""
	I0520 11:37:23.289803   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:23.289856   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.294246   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.298360   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:23.298412   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:23.335357   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:23.335374   49001 cri.go:89] found id: ""
	I0520 11:37:23.335381   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:23.335424   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.339702   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:23.339765   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:23.375990   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:23.376016   49001 cri.go:89] found id: ""
	I0520 11:37:23.376025   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:23.376076   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.380670   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:23.380805   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:23.421857   49001 cri.go:89] found id: ""
	I0520 11:37:23.421879   49001 logs.go:276] 0 containers: []
	W0520 11:37:23.421886   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:23.421900   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:23.421912   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:23.455095   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:23Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:23Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:23.455119   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:23.455130   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:23.502579   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:23.502603   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:23.538776   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:23.538800   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:23.574494   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:23.574516   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:23.883891   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:23.883931   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:23.950164   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:23.950212   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:23.950231   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:23.991907   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:23.991939   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:24.030829   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:24.030861   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:24.112185   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:24.112221   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:24.151754   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:24.151783   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:24.254484   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:24.254517   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:24.270356   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:24.270383   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:24.341080   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:24.341112   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:24.010072   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.010656   55177 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:37:24.010677   55177 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:37:24.010691   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.011059   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599
	I0520 11:37:24.085917   55177 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:37:24.085942   55177 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:37:24.085956   55177 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:37:24.088429   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.088791   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.088818   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.088964   55177 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:37:24.088995   55177 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:37:24.089044   55177 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:37:24.089068   55177 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:37:24.089085   55177 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:37:24.221040   55177 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
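WaitForSSH above simply runs "exit 0" over SSH with the external client and the flag set printed at 11:37:24.089; an empty output and exit status 0 is taken to mean the machine is reachable. A hedged reproduction of that liveness check (key path and address copied from the log; this is not minikube's own code and would need adjusting for another environment):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same idea as WaitForSSH in the log: if "exit 0" succeeds over SSH,
        // the machine is considered up. Key path and IP are from the log above.
        key := "/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa"
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "docker@192.168.39.185",
            "exit 0")
        if err := cmd.Run(); err != nil {
            fmt.Println("machine not reachable over SSH yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }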
	I0520 11:37:24.221350   55177 main.go:141] libmachine: (no-preload-814599) KVM machine creation complete!
	I0520 11:37:24.221627   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:37:24.222147   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:24.222324   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:24.222459   55177 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:37:24.222471   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:37:24.223825   55177 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:37:24.223840   55177 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:37:24.223845   55177 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:37:24.223870   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.226250   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.226582   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.226601   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.226724   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.226919   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.227095   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.227254   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.227432   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.227633   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.227644   55177 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:37:24.340138   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:37:24.340175   55177 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:37:24.340186   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.343303   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.343712   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.343746   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.343946   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.344140   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.344305   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.344456   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.344649   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.344851   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.344864   55177 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:37:24.457900   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:37:24.457989   55177 main.go:141] libmachine: found compatible host: buildroot
	I0520 11:37:24.458003   55177 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:37:24.458014   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:24.458280   55177 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:37:24.458305   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:24.458497   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.461144   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.461457   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.461481   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.461630   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.461774   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.461900   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.462049   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.462189   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.462354   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.462366   55177 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:37:24.587986   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:37:24.588014   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.590785   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.591209   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.591246   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.591429   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.591606   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.591761   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.591885   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.592097   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.592338   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.592364   55177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:37:24.714008   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:37:24.714043   55177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:37:24.714099   55177 buildroot.go:174] setting up certificates
	I0520 11:37:24.714114   55177 provision.go:84] configureAuth start
	I0520 11:37:24.714130   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:24.714411   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:24.716917   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.717360   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.717390   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.717511   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.719673   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.719999   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.720020   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.720202   55177 provision.go:143] copyHostCerts
	I0520 11:37:24.720260   55177 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:37:24.720274   55177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:37:24.720335   55177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:37:24.720442   55177 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:37:24.720452   55177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:37:24.720483   55177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:37:24.720553   55177 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:37:24.720563   55177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:37:24.720593   55177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:37:24.720661   55177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:37:24.953337   55177 provision.go:177] copyRemoteCerts
	I0520 11:37:24.953397   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:37:24.953421   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.956029   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.956354   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.956385   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.956498   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.956687   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.956852   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.956973   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:37:25.043540   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:37:25.068520   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:37:25.092950   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:37:25.118422   55177 provision.go:87] duration metric: took 404.291795ms to configureAuth
	I0520 11:37:25.118446   55177 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:37:25.118636   55177 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:37:25.118715   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.121131   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.121531   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.121561   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.121698   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.121891   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.122044   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.122181   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.122308   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:25.122513   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:25.122530   55177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:37:25.396003   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:37:25.396034   55177 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:37:25.396046   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetURL
	I0520 11:37:25.397414   55177 main.go:141] libmachine: (no-preload-814599) DBG | Using libvirt version 6000000
	I0520 11:37:25.399913   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.400326   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.400353   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.400501   55177 main.go:141] libmachine: Docker is up and running!
	I0520 11:37:25.400519   55177 main.go:141] libmachine: Reticulating splines...
	I0520 11:37:25.400526   55177 client.go:171] duration metric: took 25.079325967s to LocalClient.Create
	I0520 11:37:25.400550   55177 start.go:167] duration metric: took 25.079391411s to libmachine.API.Create "no-preload-814599"
	I0520 11:37:25.400560   55177 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:37:25.400571   55177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:37:25.400586   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.400801   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:37:25.400824   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.403000   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.403301   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.403335   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.403481   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.403645   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.403811   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.403941   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:37:25.491378   55177 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:37:25.495807   55177 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:37:25.495824   55177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:37:25.495885   55177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:37:25.495952   55177 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:37:25.496037   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:37:25.505374   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:25.529496   55177 start.go:296] duration metric: took 128.923149ms for postStartSetup
	I0520 11:37:25.529546   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:37:25.530242   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:25.532725   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.533176   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.533200   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.533439   55177 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:37:25.533628   55177 start.go:128] duration metric: took 25.239343412s to createHost
	I0520 11:37:25.533656   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.535937   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.536315   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.536340   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.536502   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.536692   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.536855   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.536987   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.537165   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:25.537315   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:25.537329   55177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:37:25.651171   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205045.624370486
	
	I0520 11:37:25.651195   55177 fix.go:216] guest clock: 1716205045.624370486
	I0520 11:37:25.651205   55177 fix.go:229] Guest: 2024-05-20 11:37:25.624370486 +0000 UTC Remote: 2024-05-20 11:37:25.533640621 +0000 UTC m=+27.826588786 (delta=90.729865ms)
	I0520 11:37:25.651230   55177 fix.go:200] guest clock delta is within tolerance: 90.729865ms
	I0520 11:37:25.651237   55177 start.go:83] releasing machines lock for "no-preload-814599", held for 25.357113439s
	I0520 11:37:25.651262   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.651516   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:25.654171   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.654501   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.654531   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.654678   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.655149   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.655308   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.655385   55177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:37:25.655435   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.655512   55177 ssh_runner.go:195] Run: cat /version.json
	I0520 11:37:25.655535   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.657897   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658222   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.658245   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658262   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658394   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.658571   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.658728   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.658736   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.658764   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658920   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.658921   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:37:25.659067   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.659212   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.659389   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:37:25.763092   55177 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:37:25.763157   55177 ssh_runner.go:195] Run: systemctl --version
	I0520 11:37:25.769613   55177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:37:25.930658   55177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:37:25.936855   55177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:37:25.936942   55177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:37:25.953422   55177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:37:25.953445   55177 start.go:494] detecting cgroup driver to use...
	I0520 11:37:25.953563   55177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:37:25.971466   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:37:25.984671   55177 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:37:25.984742   55177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:37:25.998074   55177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:37:26.011500   55177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:37:26.126412   55177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:37:26.280949   55177 docker.go:233] disabling docker service ...
	I0520 11:37:26.281031   55177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:37:26.295029   55177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:37:26.307980   55177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:37:26.422857   55177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:37:26.541328   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:37:26.556143   55177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:37:26.574739   55177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:37:26.574794   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.584821   55177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:37:26.584874   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.594783   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.604910   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.614901   55177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:37:26.625141   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.634960   55177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.651910   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.661857   55177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:37:26.671034   55177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:37:26.671101   55177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:37:26.683517   55177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:37:26.692709   55177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:26.800692   55177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:37:26.941265   55177 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:37:26.941346   55177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:37:26.946971   55177 start.go:562] Will wait 60s for crictl version
	I0520 11:37:26.947023   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.951327   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:37:26.992538   55177 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:37:26.992627   55177 ssh_runner.go:195] Run: crio --version
	I0520 11:37:27.025584   55177 ssh_runner.go:195] Run: crio --version
	I0520 11:37:27.057900   55177 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:37:27.059089   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:27.062968   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:27.063491   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:27.063522   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:27.063790   55177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:37:27.068170   55177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:27.083225   55177 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:37:27.083395   55177 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:37:27.083456   55177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:27.129466   55177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:37:27.129494   55177 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:37:27.129557   55177 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:27.129843   55177 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.129863   55177 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.129849   55177 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:37:27.129934   55177 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.130002   55177 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.130039   55177 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.130008   55177 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.131103   55177 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:27.131552   55177 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:37:27.131786   55177 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.131923   55177 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.131944   55177 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.132036   55177 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.132092   55177 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.132634   55177 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.246628   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.251235   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.263946   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.264260   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:37:27.271060   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.282205   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.283651   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.367229   55177 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:37:27.367284   55177 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.367339   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.373568   55177 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:37:27.373620   55177 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.373670   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.440049   55177 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:37:27.440097   55177 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.440145   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.457705   55177 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0520 11:37:27.457750   55177 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0520 11:37:27.457795   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.463903   55177 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:37:27.463943   55177 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.463978   55177 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:37:27.463989   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.464010   55177 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.464024   55177 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:37:27.464055   55177 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.464065   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.464097   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.464099   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.464112   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.464056   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.464143   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0520 11:37:27.540796   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.556746   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:37:27.556886   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:37:27.564187   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.564281   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0520 11:37:27.564350   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:37:27.564364   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0520 11:37:27.564374   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:37:27.564429   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:37:27.564435   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:37:27.564434   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.613351   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0520 11:37:27.613394   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0520 11:37:27.613507   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:37:27.613591   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:37:27.672183   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:37:27.672205   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0520 11:37:27.672250   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0520 11:37:27.672300   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.1': No such file or directory
	I0520 11:37:27.672305   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:37:27.672325   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 --> /var/lib/minikube/images/kube-apiserver_v1.30.1 (32776704 bytes)
	I0520 11:37:27.672355   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:37:27.672406   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.1': No such file or directory
	I0520 11:37:27.672418   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:37:27.672426   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.1': No such file or directory
	I0520 11:37:27.672433   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 --> /var/lib/minikube/images/kube-controller-manager_v1.30.1 (31146496 bytes)
	I0520 11:37:27.672447   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 --> /var/lib/minikube/images/kube-scheduler_v1.30.1 (19324928 bytes)
	I0520 11:37:27.718971   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.1': No such file or directory
	I0520 11:37:27.719011   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 --> /var/lib/minikube/images/kube-proxy_v1.30.1 (29022720 bytes)
	I0520 11:37:27.726500   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0520 11:37:27.726539   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0520 11:37:26.887907   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:26.902251   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:26.902325   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:26.945102   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:26.945124   49001 cri.go:89] found id: ""
	I0520 11:37:26.945133   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:26.945197   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.949903   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:26.949960   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:26.988144   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:26.988167   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:26.988173   49001 cri.go:89] found id: ""
	I0520 11:37:26.988180   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:26.988238   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.992745   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.998251   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:26.998324   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:27.040952   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:27.040980   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:27.040987   49001 cri.go:89] found id: ""
	I0520 11:37:27.040996   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:27.041057   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.045674   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.049831   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:27.049903   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:27.099343   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:27.099366   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:27.099372   49001 cri.go:89] found id: ""
	I0520 11:37:27.099380   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:27.099432   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.104037   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.109906   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:27.109987   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:27.154017   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:27.154035   49001 cri.go:89] found id: ""
	I0520 11:37:27.154044   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:27.154105   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.158162   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:27.158221   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:27.195164   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:27.195182   49001 cri.go:89] found id: ""
	I0520 11:37:27.195188   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:27.195233   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.199280   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:27.199327   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:27.236823   49001 cri.go:89] found id: ""
	I0520 11:37:27.236846   49001 logs.go:276] 0 containers: []
	W0520 11:37:27.236858   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:27.236870   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:27.236882   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:27.314806   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:27.314851   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:27.363216   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:27.363242   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:27.402395   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:27.402420   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:27.729526   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:27.729553   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:27.767055   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:27.767082   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:27.882099   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:27.882129   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:27.899639   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:27.899668   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:27.958550   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:27.958646   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:28.061707   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:28.061748   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:28.116585   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:28.116622   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:28.211728   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:28.211753   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:28.211803   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:28.266215   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:28Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:28Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:28.266243   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:28.266259   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:28.306859   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:28.306886   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:30.857904   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:30.873289   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:30.873357   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:27.779068   55177 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0520 11:37:27.779137   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0520 11:37:28.046088   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:28.575621   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0520 11:37:28.575669   55177 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:37:28.575675   55177 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:37:28.575707   55177 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:28.575720   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:37:28.575742   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:30.551513   55177 ssh_runner.go:235] Completed: which crictl: (1.975744229s)
	I0520 11:37:30.551553   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975816608s)
	I0520 11:37:30.551568   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:37:30.551591   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:30.551604   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:37:30.551663   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:37:30.598355   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:37:30.598457   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:37:30.908947   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:30.950363   49001 cri.go:89] found id: ""
	I0520 11:37:30.950387   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:30.950450   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:30.955960   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:30.956043   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:30.992343   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:30.992372   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:30.992378   49001 cri.go:89] found id: ""
	I0520 11:37:30.992389   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:30.992447   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:30.996956   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.000763   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:31.000823   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:31.042185   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:31.042209   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:31.042215   49001 cri.go:89] found id: ""
	I0520 11:37:31.042223   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:31.042274   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.046245   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.050282   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:31.050353   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:31.089038   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:31.089073   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:31.089079   49001 cri.go:89] found id: ""
	I0520 11:37:31.089088   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:31.089143   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.093258   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.097390   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:31.097456   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:31.140174   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:31.140201   49001 cri.go:89] found id: ""
	I0520 11:37:31.140210   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:31.140266   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.144917   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:31.145004   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:31.180759   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:31.180777   49001 cri.go:89] found id: ""
	I0520 11:37:31.180784   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:31.180833   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.184994   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:31.185048   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:31.223192   49001 cri.go:89] found id: ""
	I0520 11:37:31.223213   49001 logs.go:276] 0 containers: []
	W0520 11:37:31.223220   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:31.223233   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:31.223257   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:31.266593   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:31.266624   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:31.342228   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:31.342247   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:31.342259   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:31.387893   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:31.387927   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:31.427405   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:31.427434   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:31.746483   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:31.746525   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:31.782031   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:31Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:31Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:31.782050   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:31.782062   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:31.820977   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:31.821005   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:31.913236   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:31.913276   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:31.980608   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:31.980648   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:32.060069   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:32.060104   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:32.075967   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:32.076008   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:32.128455   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:32.128483   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:32.183253   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:32.183283   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:34.729497   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:34.746609   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:34.746725   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:34.785914   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:34.785937   49001 cri.go:89] found id: ""
	I0520 11:37:34.785946   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:34.786002   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.790445   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:34.790512   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:34.829697   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:34.829722   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:34.829727   49001 cri.go:89] found id: ""
	I0520 11:37:34.829736   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:34.829790   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.835328   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.839270   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:34.839319   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:34.875153   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:34.875178   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:34.875185   49001 cri.go:89] found id: ""
	I0520 11:37:34.875192   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:34.875247   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.879788   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.884086   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:34.884141   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:34.922741   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:34.922767   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:34.922772   49001 cri.go:89] found id: ""
	I0520 11:37:34.922781   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:34.922834   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.927043   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.931011   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:34.931062   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:34.965434   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:34.965454   49001 cri.go:89] found id: ""
	I0520 11:37:34.965462   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:34.965515   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.969567   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:34.969629   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:35.008101   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:35.008125   49001 cri.go:89] found id: ""
	I0520 11:37:35.008133   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:35.008193   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:35.012552   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:35.012612   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:35.050538   49001 cri.go:89] found id: ""
	I0520 11:37:35.050572   49001 logs.go:276] 0 containers: []
	W0520 11:37:35.050584   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:35.050603   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:35.050616   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:35.390893   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:35.390930   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:35.433798   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:35.433835   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:35.514603   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:35.514631   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:35.514646   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:35.577338   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:35.577374   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:35.658557   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:35.658593   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:35.699934   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:35.699968   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:35.745328   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:35.745355   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:35.762235   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:35.762260   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:35.857150   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:35.857195   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:33.445961   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.894267855s)
	I0520 11:37:33.445986   55177 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.847502572s)
	I0520 11:37:33.445998   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:37:33.446025   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:37:33.446059   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 11:37:33.446080   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:37:33.446107   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0520 11:37:35.729539   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.283433909s)
	I0520 11:37:35.729568   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:37:35.729599   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:37:35.729650   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:37:37.710270   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.980593085s)
	I0520 11:37:37.710299   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:37:37.710323   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:37:37.710373   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:37:35.905028   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:35.905069   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:35.941942   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:35Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:35Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:35.941975   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:35.941990   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:35.979328   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:35.979357   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:36.020549   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:36.020577   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:38.573432   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:38.587595   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:38.587679   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:38.630679   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:38.630705   49001 cri.go:89] found id: ""
	I0520 11:37:38.630713   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:38.630769   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.636007   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:38.636114   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:38.674472   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:38.674500   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:38.674507   49001 cri.go:89] found id: ""
	I0520 11:37:38.674516   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:38.674576   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.679092   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.683085   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:38.683139   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:38.731146   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:38.731170   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:38.731176   49001 cri.go:89] found id: ""
	I0520 11:37:38.731183   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:38.731242   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.735726   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.740093   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:38.740157   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:38.787213   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:38.787242   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:38.787248   49001 cri.go:89] found id: ""
	I0520 11:37:38.787257   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:38.787315   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.791653   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.795708   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:38.795779   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:38.834309   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:38.834338   49001 cri.go:89] found id: ""
	I0520 11:37:38.834347   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:38.834403   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.839590   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:38.839651   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:38.880056   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:38.880078   49001 cri.go:89] found id: ""
	I0520 11:37:38.880085   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:38.880137   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.884462   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:38.884517   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:38.925598   49001 cri.go:89] found id: ""
	I0520 11:37:38.925620   49001 logs.go:276] 0 containers: []
	W0520 11:37:38.925629   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:38.925646   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:38.925662   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:38.967708   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:38.967742   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:39.004493   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:39.004528   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:39.043895   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:39.043926   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:39.089707   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:39.089738   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:39.142616   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:39.142647   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:39.243147   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:39.243180   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:39.293839   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:39.293876   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:39.329260   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:39Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:39Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:39.329282   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:39.329293   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:39.371235   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:39.371263   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:39.702279   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:39.702317   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:39.719489   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:39.719529   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:39.790443   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:39.790460   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:39.790472   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:39.864449   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:39.864483   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:40.074777   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.364379197s)
	I0520 11:37:40.074800   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:37:40.074840   55177 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:37:40.074900   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:37:42.445256   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:42.459489   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:42.459560   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:42.505468   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:42.505491   49001 cri.go:89] found id: ""
	I0520 11:37:42.505500   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:42.505556   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.510073   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:42.510146   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:42.547578   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:42.547603   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:42.547608   49001 cri.go:89] found id: ""
	I0520 11:37:42.547616   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:42.547674   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.552444   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.557566   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:42.557635   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:42.603953   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:42.603977   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:42.603982   49001 cri.go:89] found id: ""
	I0520 11:37:42.603990   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:42.604045   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.609148   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.614267   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:42.614323   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:42.650755   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:42.650784   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:42.650790   49001 cri.go:89] found id: ""
	I0520 11:37:42.650804   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:42.650860   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.655292   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.659414   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:42.659490   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:42.701066   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:42.701089   49001 cri.go:89] found id: ""
	I0520 11:37:42.701098   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:42.701151   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.705572   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:42.705622   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:42.748408   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:42.748439   49001 cri.go:89] found id: ""
	I0520 11:37:42.748449   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:42.748504   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.753064   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:42.753138   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:42.793841   49001 cri.go:89] found id: ""
	I0520 11:37:42.793867   49001 logs.go:276] 0 containers: []
	W0520 11:37:42.793879   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:42.793896   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:42.793911   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:42.868634   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:42.868668   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:42.909759   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:42.909786   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:42.909800   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:42.957418   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:42.957450   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:43.032601   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:43.032630   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:43.032645   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:43.070087   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:43.070114   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:43.145890   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:43.145924   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:43.182905   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:43.182933   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:43.524243   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:43.524285   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:43.615647   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:43.615677   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:43.655032   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:43.655058   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:43.695013   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:43.695040   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:43.742525   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:43.742567   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:43.793492   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:43.793536   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:44.182697   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.107772456s)
	I0520 11:37:44.182726   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:37:44.182751   55177 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:37:44.182803   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:37:44.828689   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:37:44.828729   55177 cache_images.go:123] Successfully loaded all cached images
	I0520 11:37:44.828736   55177 cache_images.go:92] duration metric: took 17.699229187s to LoadCachedImages
	I0520 11:37:44.828760   55177 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:37:44.828899   55177 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:37:44.828991   55177 ssh_runner.go:195] Run: crio config
	I0520 11:37:44.874986   55177 cni.go:84] Creating CNI manager for ""
	I0520 11:37:44.875006   55177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:44.875023   55177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:37:44.875047   55177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:37:44.875178   55177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:37:44.875243   55177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:37:44.884921   55177 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 11:37:44.884993   55177 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 11:37:44.894194   55177 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 11:37:44.894246   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:37:44.894261   55177 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 11:37:44.894340   55177 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 11:37:44.894349   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 11:37:44.894399   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 11:37:44.911262   55177 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 11:37:44.911298   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 11:37:44.911403   55177 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 11:37:44.911405   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 11:37:44.911439   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 11:37:44.927806   55177 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 11:37:44.927842   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 11:37:45.687562   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:37:45.698455   55177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:37:45.715540   55177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:37:45.732096   55177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:37:45.749226   55177 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:37:45.753251   55177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:45.765783   55177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:45.884722   55177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:37:45.901158   55177 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:37:45.901184   55177 certs.go:194] generating shared ca certs ...
	I0520 11:37:45.901206   55177 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:45.901399   55177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:37:45.901458   55177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:37:45.901474   55177 certs.go:256] generating profile certs ...
	I0520 11:37:45.901537   55177 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:37:45.901550   55177 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt with IP's: []
	I0520 11:37:45.963732   55177 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt ...
	I0520 11:37:45.963779   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: {Name:mk0cb8897bf8302b52dc7ca5046d539a70405121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:45.963960   55177 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key ...
	I0520 11:37:45.963971   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key: {Name:mkc542e91fa2ebd4a233ff7fd999cbdcec3b969d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:45.964047   55177 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:37:45.964063   55177 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I0520 11:37:46.111803   55177 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5 ...
	I0520 11:37:46.111832   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5: {Name:mk03571f628d50acb43d58fb9e6f8c79750d1719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.111987   55177 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5 ...
	I0520 11:37:46.112000   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5: {Name:mk134aa0713e2931956f241e87d95af2ee2ed3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.112067   55177 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt
	I0520 11:37:46.112166   55177 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key
	I0520 11:37:46.112228   55177 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:37:46.112245   55177 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt with IP's: []
	I0520 11:37:46.549891   55177 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt ...
	I0520 11:37:46.549916   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt: {Name:mk8ba6460ca87a91db3736a19ac0b4aa263186c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.550108   55177 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key ...
	I0520 11:37:46.550128   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key: {Name:mkb0abd6decbc99a654d27267c113d1ab1d3de2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.550328   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:37:46.550364   55177 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:37:46.550375   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:37:46.550397   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:37:46.550420   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:37:46.550442   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:37:46.550477   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:46.551044   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:37:46.585010   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:37:46.628269   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:37:46.660505   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:37:46.691056   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:37:46.717801   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:37:46.745010   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:37:46.771581   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:37:46.797609   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:37:46.824080   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:37:46.850011   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:37:46.876333   55177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:37:46.897181   55177 ssh_runner.go:195] Run: openssl version
	I0520 11:37:46.905468   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:37:46.918991   55177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:37:46.925511   55177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:37:46.925582   55177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:37:46.934013   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:37:46.948980   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:37:46.964110   55177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:37:46.970166   55177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:37:46.970217   55177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:37:46.976331   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:37:46.991532   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:37:47.005423   55177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:47.010919   55177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:47.010974   55177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:47.018789   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:37:47.030467   55177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:37:47.034911   55177 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:37:47.034975   55177 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:37:47.035068   55177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:37:47.035119   55177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:37:47.080886   55177 cri.go:89] found id: ""
	I0520 11:37:47.080977   55177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:37:47.091749   55177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:37:47.102131   55177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:37:47.112035   55177 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:37:47.112054   55177 kubeadm.go:156] found existing configuration files:
	
	I0520 11:37:47.112094   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:37:47.121196   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:37:47.121234   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:37:47.130723   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:37:47.139592   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:37:47.139630   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:37:47.149295   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:37:47.158474   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:37:47.158518   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:37:47.168002   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:37:47.179568   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:37:47.179614   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:37:47.191617   55177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:37:47.255028   55177 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:37:47.255305   55177 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:37:47.416469   55177 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:37:47.416613   55177 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:37:47.416736   55177 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:37:47.650205   55177 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:37:47.652566   55177 out.go:204]   - Generating certificates and keys ...
	I0520 11:37:47.652662   55177 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:37:47.652761   55177 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:37:47.718358   55177 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 11:37:46.309055   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:46.322358   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:46.322414   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:46.361895   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:46.361917   49001 cri.go:89] found id: ""
	I0520 11:37:46.361926   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:46.361982   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.366364   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:46.366430   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:46.407171   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:46.407195   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:46.407201   49001 cri.go:89] found id: ""
	I0520 11:37:46.407209   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:46.407268   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.411308   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.414901   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:46.414961   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:46.449327   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:46.449354   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:46.449360   49001 cri.go:89] found id: ""
	I0520 11:37:46.449367   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:46.449420   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.453472   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.457223   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:46.457279   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:46.491554   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:46.491576   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:46.491581   49001 cri.go:89] found id: ""
	I0520 11:37:46.491590   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:46.491642   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.495599   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.499467   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:46.499525   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:46.533561   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:46.533582   49001 cri.go:89] found id: ""
	I0520 11:37:46.533591   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:46.533642   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.537534   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:46.537584   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:46.576563   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:46.576585   49001 cri.go:89] found id: ""
	I0520 11:37:46.576592   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:46.576635   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.582100   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:46.582156   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:46.620799   49001 cri.go:89] found id: ""
	I0520 11:37:46.620841   49001 logs.go:276] 0 containers: []
	W0520 11:37:46.620853   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:46.620871   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:46.620888   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:46.703641   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:46.703668   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:46.703679   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:46.748549   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:46.748583   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:46.784693   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:46Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:46Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:46.784717   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:46.784733   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:46.864481   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:46.864510   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:46.904736   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:46.904770   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:47.252609   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:47.252642   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:47.380977   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:47.381012   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:47.399130   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:47.399164   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:47.439506   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:47.439539   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:47.486345   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:47.486374   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:47.527931   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:47.527958   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:47.569521   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:47.569555   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:47.635780   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:47.635814   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:50.179381   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:50.193636   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:50.193693   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:50.232908   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:50.232952   49001 cri.go:89] found id: ""
	I0520 11:37:50.232961   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:50.233017   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.237261   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:50.237315   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:50.284220   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:50.284243   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:50.284249   49001 cri.go:89] found id: ""
	I0520 11:37:50.284258   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:50.284318   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.288607   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.292550   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:50.292601   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:50.335296   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:50.335318   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:50.335324   49001 cri.go:89] found id: ""
	I0520 11:37:50.335331   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:50.335388   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.339529   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.343452   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:50.343512   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:50.382033   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:50.382055   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:50.382069   49001 cri.go:89] found id: ""
	I0520 11:37:50.382077   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:50.382132   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.386218   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.390003   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:50.390063   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:50.430907   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:50.430931   49001 cri.go:89] found id: ""
	I0520 11:37:50.430943   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:50.430997   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.435375   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:50.435447   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:50.479010   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:50.479037   49001 cri.go:89] found id: ""
	I0520 11:37:50.479052   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:50.479114   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.483423   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:50.483497   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:50.524866   49001 cri.go:89] found id: ""
	I0520 11:37:50.524890   49001 logs.go:276] 0 containers: []
	W0520 11:37:50.524900   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:50.524917   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:50.524943   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:50.539631   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:50.539660   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:50.576035   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:50.576072   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:50.608587   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:50.608612   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:50.645338   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:50.645357   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:50.645369   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:50.738685   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:50.738718   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:48.006405   55177 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 11:37:48.441893   55177 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 11:37:48.539041   55177 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 11:37:48.662501   55177 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 11:37:48.662876   55177 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-814599] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0520 11:37:48.806531   55177 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 11:37:48.806692   55177 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-814599] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0520 11:37:48.864031   55177 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 11:37:49.122739   55177 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 11:37:49.274395   55177 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 11:37:49.274523   55177 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:37:49.766189   55177 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:37:49.981913   55177 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:37:50.714947   55177 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:37:50.974451   55177 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:37:51.258349   55177 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:37:51.259230   55177 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:37:51.262545   55177 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:37:51.286615   55177 out.go:204]   - Booting up control plane ...
	I0520 11:37:51.286763   55177 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:37:51.286893   55177 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:37:51.287001   55177 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:37:51.287179   55177 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:37:51.287292   55177 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:37:51.287355   55177 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:37:51.434555   55177 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:37:51.434684   55177 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:37:51.936163   55177 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.795085ms
	I0520 11:37:51.936240   55177 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:37:51.088429   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:51.088475   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:51.131476   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:51.131512   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:51.201646   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:51.201668   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:51.201680   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:51.238758   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:51.238790   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:51.337664   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:51.337706   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:51.405571   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:51.405601   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:51.450727   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:51.450756   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:51.501451   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:51.501478   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:54.046167   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:54.063445   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:54.063518   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:54.100590   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:54.100612   49001 cri.go:89] found id: ""
	I0520 11:37:54.100622   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:54.100681   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.105187   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:54.105255   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:54.141556   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:54.141579   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:54.141583   49001 cri.go:89] found id: ""
	I0520 11:37:54.141590   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:54.141645   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.145756   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.149760   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:54.149822   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:54.184636   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:54.184669   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:54.184675   49001 cri.go:89] found id: ""
	I0520 11:37:54.184684   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:54.184744   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.188820   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.192631   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:54.192703   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:54.228234   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:54.228260   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:54.228266   49001 cri.go:89] found id: ""
	I0520 11:37:54.228274   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:54.228328   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.232901   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.237012   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:54.237079   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:54.277191   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:54.277214   49001 cri.go:89] found id: ""
	I0520 11:37:54.277223   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:54.277276   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.281629   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:54.281711   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:54.325617   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:54.325642   49001 cri.go:89] found id: ""
	I0520 11:37:54.325652   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:54.325705   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.330858   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:54.330928   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:54.372269   49001 cri.go:89] found id: ""
	I0520 11:37:54.372301   49001 logs.go:276] 0 containers: []
	W0520 11:37:54.372311   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:54.372328   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:54.372345   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:54.452222   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:54.452255   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:54.492984   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:54.493012   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:54.827685   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:54.827732   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:54.934080   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:54.934116   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:55.000134   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:55.000162   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:55.000174   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:55.039134   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:55.039162   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:55.081373   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:55.081414   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:55.116827   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:55.116861   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:55.155511   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:55.155539   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:55.190401   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:55.190429   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:55.190444   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:55.205815   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:55.205855   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:55.281362   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:55.281394   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:55.321027   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:55.321062   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:57.437144   55177 kubeadm.go:309] [api-check] The API server is healthy after 5.501950158s
	I0520 11:37:57.449888   55177 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:37:57.467181   55177 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:37:57.489712   55177 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:37:57.489928   55177 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:37:57.502473   55177 kubeadm.go:309] [bootstrap-token] Using token: txomnq.xx9lvay4m5g9anm2
	I0520 11:37:57.504043   55177 out.go:204]   - Configuring RBAC rules ...
	I0520 11:37:57.504207   55177 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:37:57.512460   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:37:57.519408   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:37:57.522560   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:37:57.525953   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:37:57.529180   55177 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:37:57.843667   55177 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:37:58.286964   55177 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:37:58.844540   55177 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:37:58.845755   55177 kubeadm.go:309] 
	I0520 11:37:58.845844   55177 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:37:58.845855   55177 kubeadm.go:309] 
	I0520 11:37:58.845990   55177 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:37:58.845999   55177 kubeadm.go:309] 
	I0520 11:37:58.846036   55177 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:37:58.846123   55177 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:37:58.846217   55177 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:37:58.846239   55177 kubeadm.go:309] 
	I0520 11:37:58.846326   55177 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:37:58.846336   55177 kubeadm.go:309] 
	I0520 11:37:58.846408   55177 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:37:58.846429   55177 kubeadm.go:309] 
	I0520 11:37:58.846518   55177 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:37:58.846619   55177 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:37:58.846711   55177 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:37:58.846720   55177 kubeadm.go:309] 
	I0520 11:37:58.846819   55177 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:37:58.846936   55177 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:37:58.846947   55177 kubeadm.go:309] 
	I0520 11:37:58.847042   55177 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token txomnq.xx9lvay4m5g9anm2 \
	I0520 11:37:58.847175   55177 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:37:58.847207   55177 kubeadm.go:309] 	--control-plane 
	I0520 11:37:58.847217   55177 kubeadm.go:309] 
	I0520 11:37:58.847324   55177 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:37:58.847335   55177 kubeadm.go:309] 
	I0520 11:37:58.847432   55177 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token txomnq.xx9lvay4m5g9anm2 \
	I0520 11:37:58.847545   55177 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:37:58.847994   55177 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:37:58.848101   55177 cni.go:84] Creating CNI manager for ""
	I0520 11:37:58.848116   55177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:58.849751   55177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:37:57.875532   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:57.890122   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:57.890194   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:57.929792   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:57.929823   49001 cri.go:89] found id: ""
	I0520 11:37:57.929834   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:57.929912   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:57.934207   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:57.934271   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:57.976109   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:57.976134   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:57.976139   49001 cri.go:89] found id: ""
	I0520 11:37:57.976148   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:57.976206   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:57.980680   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:57.985469   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:57.985515   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:58.029316   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:58.029344   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:58.029349   49001 cri.go:89] found id: ""
	I0520 11:37:58.029357   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:58.029414   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.033846   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.038670   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:58.038728   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:58.081009   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:58.081031   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:58.081036   49001 cri.go:89] found id: ""
	I0520 11:37:58.081044   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:58.081107   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.085879   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.090280   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:58.090345   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:58.134174   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:58.134200   49001 cri.go:89] found id: ""
	I0520 11:37:58.134211   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:58.134270   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.138842   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:58.138920   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:58.181832   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:58.181853   49001 cri.go:89] found id: ""
	I0520 11:37:58.181860   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:58.181904   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.186287   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:58.186356   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:58.225130   49001 cri.go:89] found id: ""
	I0520 11:37:58.225160   49001 logs.go:276] 0 containers: []
	W0520 11:37:58.225171   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:58.225185   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:58.225202   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:58.239115   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:58.239151   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:58.293415   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:58.293447   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:58.345194   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:58.345219   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:58.461100   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:58.461134   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:58.530268   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:58.530292   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:58.530307   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:58.579308   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:58.579334   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:58.616046   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:58Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:58Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:58.616063   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:58.616076   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:58.936534   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:58.936565   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:58.982759   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:58.982785   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:59.027540   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:59.027568   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:59.094665   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:59.094692   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:59.171686   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:59.171716   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:59.211960   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:59.211991   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:58.850979   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:37:58.863166   55177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:37:58.887505   55177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:37:58.887640   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:37:58.887640   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_37_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:37:59.072977   55177 ops.go:34] apiserver oom_adj: -16
	I0520 11:37:59.073140   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:37:59.574135   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:00.074127   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:00.573722   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:01.073970   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:01.574131   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:02.073146   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:02.574121   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:01.755545   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:01.770871   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:01.770946   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:01.808952   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:01.808981   49001 cri.go:89] found id: ""
	I0520 11:38:01.808990   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:01.809053   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.813167   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:01.813238   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:01.847015   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:01.847038   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:01.847043   49001 cri.go:89] found id: ""
	I0520 11:38:01.847050   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:01.847103   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.851340   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.855148   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:01.855192   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:01.888084   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:01.888103   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:01.888106   49001 cri.go:89] found id: ""
	I0520 11:38:01.888112   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:01.888159   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.892130   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.895885   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:01.895948   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:01.931943   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:01.931965   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:01.931970   49001 cri.go:89] found id: ""
	I0520 11:38:01.931979   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:01.932032   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.936969   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.940913   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:01.940994   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:01.975734   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:01.975755   49001 cri.go:89] found id: ""
	I0520 11:38:01.975762   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:01.975818   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.980258   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:01.980308   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:02.020165   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:02.020180   49001 cri.go:89] found id: ""
	I0520 11:38:02.020187   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:02.020231   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:02.024833   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:02.024898   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:02.073708   49001 cri.go:89] found id: ""
	I0520 11:38:02.073726   49001 logs.go:276] 0 containers: []
	W0520 11:38:02.073733   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:02.073748   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:02.073764   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:02.369408   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:02.369450   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:02.418311   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:02.418340   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:02.451908   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:02.451940   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:02.489458   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:02.489484   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:02.553002   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:02.553029   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:02.594301   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:02.594329   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:02.634342   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:02.634366   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:02.672411   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:02.672442   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:02.712054   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:02.712083   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:02.784091   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:02.784121   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:02.884619   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:02.884654   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:02.899557   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:02.899582   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:02.969559   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:02.969581   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:02.969595   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:03.005807   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:02Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:02Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:05.506332   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:05.520379   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:05.520453   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:05.554156   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:05.554181   49001 cri.go:89] found id: ""
	I0520 11:38:05.554190   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:05.554246   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.558685   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:05.558758   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:05.600568   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:05.600593   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:05.600598   49001 cri.go:89] found id: ""
	I0520 11:38:05.600607   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:05.600653   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.605799   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.611071   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:05.611136   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:05.650467   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:05.650492   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:05.650497   49001 cri.go:89] found id: ""
	I0520 11:38:05.650504   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:05.650551   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.655176   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.659748   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:05.659821   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:05.695623   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:05.695643   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:05.695647   49001 cri.go:89] found id: ""
	I0520 11:38:05.695654   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:05.695701   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.700108   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.704252   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:05.704322   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:05.739656   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:05.739680   49001 cri.go:89] found id: ""
	I0520 11:38:05.739688   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:05.739744   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.743870   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:05.743933   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:05.785143   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:05.785169   49001 cri.go:89] found id: ""
	I0520 11:38:05.785177   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:05.785236   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.789604   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:05.789663   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:05.828131   49001 cri.go:89] found id: ""
	I0520 11:38:05.828158   49001 logs.go:276] 0 containers: []
	W0520 11:38:05.828168   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:05.828186   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:05.828202   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:05.842034   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:05.842058   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:03.074134   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:03.574137   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:04.073720   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:04.574192   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:05.073829   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:05.573879   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:06.073515   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:06.574110   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:07.073907   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:07.573773   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:05.875216   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:05.877160   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:05.971422   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:05.971460   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:06.015904   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:06.015934   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:06.052145   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:06.052174   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:06.094286   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:06Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:06Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:06.094315   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:06.094331   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:06.178181   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:06.178213   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:06.227395   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:06.227423   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:06.291628   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:06.291649   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:06.291661   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:06.334195   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:06.334223   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:06.374025   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:06.374053   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:06.448074   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:06.448115   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:06.485428   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:06.485457   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:09.288254   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:09.304075   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:09.304135   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:09.343973   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:09.343997   49001 cri.go:89] found id: ""
	I0520 11:38:09.344004   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:09.344056   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.348468   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:09.348534   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:09.387379   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:09.387406   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:09.387411   49001 cri.go:89] found id: ""
	I0520 11:38:09.387421   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:09.387480   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.392029   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.396158   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:09.396206   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:09.432485   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:09.432503   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:09.432506   49001 cri.go:89] found id: ""
	I0520 11:38:09.432512   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:09.432558   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.436751   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.441079   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:09.441127   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:09.493660   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:09.493683   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:09.493689   49001 cri.go:89] found id: ""
	I0520 11:38:09.493697   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:09.493757   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.498119   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.502318   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:09.502374   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:09.544814   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:09.544836   49001 cri.go:89] found id: ""
	I0520 11:38:09.544844   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:09.544897   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.549900   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:09.549969   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:09.593250   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:09.593274   49001 cri.go:89] found id: ""
	I0520 11:38:09.593282   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:09.593336   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.597853   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:09.597910   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:09.638338   49001 cri.go:89] found id: ""
	I0520 11:38:09.638363   49001 logs.go:276] 0 containers: []
	W0520 11:38:09.638372   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:09.638391   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:09.638408   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:09.708117   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:09.708140   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:09.708152   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:09.754468   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:09.754500   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:09.819363   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:09.819406   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:09.857032   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:09.857060   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:09.857075   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:09.894161   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:09.894189   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:09.936809   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:09.936893   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:10.045129   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:10.045158   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:10.060573   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:10.060599   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:10.107107   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:10.107139   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:10.158773   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:10.158800   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:10.207133   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:10.207161   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:10.249976   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:10.250002   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:10.333791   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:10.333823   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:08.073446   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:08.573368   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:09.073630   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:09.574030   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:10.073838   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:10.573821   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:11.073290   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:11.573985   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:11.662312   55177 kubeadm.go:1107] duration metric: took 12.774738407s to wait for elevateKubeSystemPrivileges
	W0520 11:38:11.662350   55177 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:38:11.662358   55177 kubeadm.go:393] duration metric: took 24.6273899s to StartCluster
	I0520 11:38:11.662374   55177 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:11.662435   55177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:38:11.663629   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:11.663853   55177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 11:38:11.663853   55177 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:38:11.665528   55177 out.go:177] * Verifying Kubernetes components...
	I0520 11:38:11.663958   55177 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:38:11.666849   55177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:38:11.665561   55177 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:38:11.666952   55177 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:38:11.664050   55177 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:38:11.665569   55177 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:38:11.667114   55177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:38:11.667002   55177 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:38:11.667546   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.667570   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.667573   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.667600   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.682729   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0520 11:38:11.682802   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0520 11:38:11.683238   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.683247   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.683729   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.683744   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.683886   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.683913   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.684162   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.684228   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.684376   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:11.684794   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.684831   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.687840   55177 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	I0520 11:38:11.687882   55177 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:38:11.688229   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.688258   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.700574   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I0520 11:38:11.700996   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.701527   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.701555   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.701879   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.702091   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:11.703051   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I0520 11:38:11.703377   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.703745   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:38:11.705546   55177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:38:11.704248   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.706748   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.706871   55177 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:38:11.706888   55177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:38:11.706907   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:38:11.707128   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.707692   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.707716   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.709974   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.710410   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:38:11.710437   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.710587   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:38:11.710782   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:38:11.710941   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:38:11.711099   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:38:11.722861   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0520 11:38:11.723238   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.723627   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.723641   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.723929   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.724089   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:11.725813   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:38:11.726117   55177 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:38:11.726135   55177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:38:11.726152   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:38:11.728786   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.729324   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:38:11.729347   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.729487   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:38:11.729647   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:38:11.729802   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:38:11.729968   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:38:11.934267   55177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:38:11.934815   55177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 11:38:11.970939   55177 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:38:11.986808   55177 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:38:11.986838   55177 node_ready.go:38] duration metric: took 15.864359ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:38:11.986849   55177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:11.996124   55177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.012434   55177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:38:12.023869   55177 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:12.023896   55177 pod_ready.go:81] duration metric: took 27.747206ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.023910   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.090125   55177 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:12.090154   55177 pod_ready.go:81] duration metric: took 66.236019ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.090167   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.129141   55177 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:12.129171   55177 pod_ready.go:81] duration metric: took 38.996502ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.129185   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.146582   55177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:38:12.702736   55177 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 11:38:13.186343   55177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039726061s)
	I0520 11:38:13.186348   55177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.173874154s)
	I0520 11:38:13.186441   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186406   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186483   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.186460   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.186851   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.186869   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.186857   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.186880   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.186890   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186892   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.186898   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.186906   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.186939   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186955   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.187404   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.187433   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.187430   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.187441   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.188481   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.188499   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.216531   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.216550   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.216869   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.216900   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.218600   55177 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 11:38:13.217156   55177 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-814599" context rescaled to 1 replicas
	I0520 11:38:13.220482   55177 addons.go:505] duration metric: took 1.556523033s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 11:38:13.648055   55177 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:13.648083   55177 pod_ready.go:81] duration metric: took 1.518890702s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:13.648096   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:13.672171   55177 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:13.672198   55177 pod_ready.go:81] duration metric: took 24.088741ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:13.672209   55177 pod_ready.go:38] duration metric: took 1.685347105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:13.672227   55177 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:38:13.672288   55177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:13.707243   55177 api_server.go:72] duration metric: took 2.043355375s to wait for apiserver process to appear ...
	I0520 11:38:13.707273   55177 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:38:13.707306   55177 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:38:13.714628   55177 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:38:13.716833   55177 api_server.go:141] control plane version: v1.30.1
	I0520 11:38:13.716855   55177 api_server.go:131] duration metric: took 9.574901ms to wait for apiserver health ...
	I0520 11:38:13.716861   55177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:38:13.780609   55177 system_pods.go:59] 8 kube-system pods found
	I0520 11:38:13.780651   55177 system_pods.go:61] "coredns-7db6d8ff4d-2j2br" [2d6df494-bcbc-4dcd-9d36-c15cdd5bb905] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I0520 11:38:13.780662   55177 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:38:13.780670   55177 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running
	I0520 11:38:13.780678   55177 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running
	I0520 11:38:13.780683   55177 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running
	I0520 11:38:13.780688   55177 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:38:13.780698   55177 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running
	I0520 11:38:13.780705   55177 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:38:13.780716   55177 system_pods.go:74] duration metric: took 63.849014ms to wait for pod list to return data ...
	I0520 11:38:13.780733   55177 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:38:13.974558   55177 default_sa.go:45] found service account: "default"
	I0520 11:38:13.974583   55177 default_sa.go:55] duration metric: took 193.840535ms for default service account to be created ...
	I0520 11:38:13.974592   55177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:38:14.179931   55177 system_pods.go:86] 8 kube-system pods found
	I0520 11:38:14.179964   55177 system_pods.go:89] "coredns-7db6d8ff4d-2j2br" [2d6df494-bcbc-4dcd-9d36-c15cdd5bb905] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I0520 11:38:14.179975   55177 system_pods.go:89] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:38:14.179982   55177 system_pods.go:89] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running
	I0520 11:38:14.179991   55177 system_pods.go:89] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running
	I0520 11:38:14.179997   55177 system_pods.go:89] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running
	I0520 11:38:14.180004   55177 system_pods.go:89] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:38:14.180010   55177 system_pods.go:89] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running
	I0520 11:38:14.180022   55177 system_pods.go:89] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:38:14.180051   55177 retry.go:31] will retry after 205.867365ms: missing components: kube-dns
	I0520 11:38:14.394051   55177 system_pods.go:86] 7 kube-system pods found
	I0520 11:38:14.394095   55177 system_pods.go:89] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:38:14.394106   55177 system_pods.go:89] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running
	I0520 11:38:14.394114   55177 system_pods.go:89] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running
	I0520 11:38:14.394123   55177 system_pods.go:89] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running
	I0520 11:38:14.394129   55177 system_pods.go:89] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:38:14.394135   55177 system_pods.go:89] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running
	I0520 11:38:14.394140   55177 system_pods.go:89] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:38:14.394150   55177 system_pods.go:126] duration metric: took 419.551262ms to wait for k8s-apps to be running ...
	I0520 11:38:14.394171   55177 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:38:14.394227   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:38:14.410212   55177 system_svc.go:56] duration metric: took 16.035102ms WaitForService to wait for kubelet
	I0520 11:38:14.410243   55177 kubeadm.go:576] duration metric: took 2.746359236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:38:14.410267   55177 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:38:14.574827   55177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:38:14.574868   55177 node_conditions.go:123] node cpu capacity is 2
	I0520 11:38:14.574896   55177 node_conditions.go:105] duration metric: took 164.622789ms to run NodePressure ...
	I0520 11:38:14.574910   55177 start.go:240] waiting for startup goroutines ...
	I0520 11:38:14.574921   55177 start.go:245] waiting for cluster config update ...
	I0520 11:38:14.574936   55177 start.go:254] writing updated cluster config ...
	I0520 11:38:14.575308   55177 ssh_runner.go:195] Run: rm -f paused
	I0520 11:38:14.624181   55177 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:38:14.626334   55177 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
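The lines above show the start path for the no-preload-814599 profile polling https://192.168.39.185:8443/healthz until it returns 200, then waiting for the kube-system pods, the default service account, and node conditions before declaring the cluster ready. As a rough illustration only (this is a hypothetical sketch, not minikube's actual implementation; the endpoint, timeout, and skipped TLS verification are assumptions), such a health poll can be written as:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. TLS verification is skipped here only
// because this sketch has no access to the cluster CA bundle.
func waitForHealthz(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", endpoint, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.185:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```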
	I0520 11:38:13.140283   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:13.159225   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:13.159298   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:13.202120   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:13.202143   49001 cri.go:89] found id: ""
	I0520 11:38:13.202153   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:13.202208   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.207060   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:13.207131   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:13.247442   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:13.247469   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:13.247476   49001 cri.go:89] found id: ""
	I0520 11:38:13.247484   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:13.247544   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.253117   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.257685   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:13.257760   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:13.295709   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:13.295733   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:13.295738   49001 cri.go:89] found id: ""
	I0520 11:38:13.295746   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:13.295808   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.300226   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.305016   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:13.305089   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:13.351175   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:13.351199   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:13.351205   49001 cri.go:89] found id: ""
	I0520 11:38:13.351213   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:13.351265   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.355878   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.360181   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:13.360251   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:13.404715   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:13.404740   49001 cri.go:89] found id: ""
	I0520 11:38:13.404750   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:13.404807   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.409346   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:13.409406   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:13.445545   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:13.445570   49001 cri.go:89] found id: ""
	I0520 11:38:13.445578   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:13.445639   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.450746   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:13.450816   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:13.491412   49001 cri.go:89] found id: ""
	I0520 11:38:13.491436   49001 logs.go:276] 0 containers: []
	W0520 11:38:13.491445   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:13.491458   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:13.491470   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:13.540230   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:13.540271   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:13.650145   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:13.650183   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:13.695626   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:13.695673   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:13.734914   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:13.734938   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:13.749048   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:13.749078   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:13.798905   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:13.798946   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:13.870877   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:13.870910   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:13.908078   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:13.908112   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:14.008196   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:14.008229   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:14.078476   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:14.078502   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:14.078522   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:14.116654   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:14Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:14Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:14.116677   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:14.116693   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:14.153652   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:14.153678   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:14.461830   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:14.461873   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:17.021128   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:17.035181   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:17.035265   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:17.072505   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:17.072538   49001 cri.go:89] found id: ""
	I0520 11:38:17.072547   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:17.072605   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.076869   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:17.076937   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:17.114526   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:17.114546   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:17.114549   49001 cri.go:89] found id: ""
	I0520 11:38:17.114556   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:17.114599   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.118935   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.123340   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:17.123400   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:17.162519   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:17.162542   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:17.162548   49001 cri.go:89] found id: ""
	I0520 11:38:17.162556   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:17.162609   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.167204   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.171366   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:17.171419   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:17.209885   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:17.209910   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:17.209916   49001 cri.go:89] found id: ""
	I0520 11:38:17.209924   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:17.209983   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.214346   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.218379   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:17.218454   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:17.256838   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:17.256861   49001 cri.go:89] found id: ""
	I0520 11:38:17.256869   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:17.256939   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.261405   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:17.261469   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:17.300854   49001 cri.go:89] found id: "8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:17.300873   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:17.300879   49001 cri.go:89] found id: ""
	I0520 11:38:17.300888   49001 logs.go:276] 2 containers: [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:17.300966   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.305449   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.310191   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:17.310235   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:17.347442   49001 cri.go:89] found id: ""
	I0520 11:38:17.347472   49001 logs.go:276] 0 containers: []
	W0520 11:38:17.347482   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:17.347499   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:17.347516   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:17.423945   49001 logs.go:123] Gathering logs for kube-controller-manager [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb] ...
	I0520 11:38:17.423978   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:17.462895   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:17.462917   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:17.501643   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:17.501668   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:17.571073   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:17.571105   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:17.619999   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:17.620025   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:17.667926   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:17Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:17Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:17.667958   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:17.667973   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:17.709635   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:17.709666   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:18.008685   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:18.008721   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:18.027721   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:18.027757   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:18.101182   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:18.101207   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:18.101222   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:18.140099   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:18.140136   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:18.194360   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:18.194388   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:18.294219   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:18.294254   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:18.335701   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:18.335725   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:20.886589   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:20.901878   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:20.901934   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:20.944886   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:20.944915   49001 cri.go:89] found id: ""
	I0520 11:38:20.944939   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:20.945012   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:20.949447   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:20.949516   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:20.983674   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:20.983699   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:20.983705   49001 cri.go:89] found id: ""
	I0520 11:38:20.983714   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:20.983770   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:20.988081   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:20.992237   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:20.992297   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:21.032367   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:21.032392   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:21.032398   49001 cri.go:89] found id: ""
	I0520 11:38:21.032406   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:21.032462   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.036703   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.040505   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:21.040560   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:21.077869   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:21.077891   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:21.077895   49001 cri.go:89] found id: ""
	I0520 11:38:21.077912   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:21.077971   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.082081   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.086005   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:21.086067   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:21.119906   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:21.119931   49001 cri.go:89] found id: ""
	I0520 11:38:21.119941   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:21.120008   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.124462   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:21.124525   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:21.168036   49001 cri.go:89] found id: "8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:21.168057   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:21.168061   49001 cri.go:89] found id: ""
	I0520 11:38:21.168067   49001 logs.go:276] 2 containers: [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:21.168130   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.173361   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.178007   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:21.178086   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:21.218589   49001 cri.go:89] found id: ""
	I0520 11:38:21.218613   49001 logs.go:276] 0 containers: []
	W0520 11:38:21.218621   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:21.218636   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:21.218646   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:21.259467   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:21.259496   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:21.332306   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:21.332323   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:21.332334   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:21.421544   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:21.421592   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:21.471622   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:21.471649   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:21.513493   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:21Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:21Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:21.513516   49001 logs.go:123] Gathering logs for kube-controller-manager [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb] ...
	I0520 11:38:21.513533   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:21.554897   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:21.554925   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:21.589547   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:21.589575   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:21.883651   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:21.883690   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:21.953715   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:21.953746   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:22.000910   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:22.000946   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:22.045360   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:22.045411   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:22.085436   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:22.085467   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:22.124882   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:22.124914   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:22.223409   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:22.223442   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:24.738469   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:24.752248   49001 kubeadm.go:591] duration metric: took 4m2.032144588s to restartPrimaryControlPlane
	W0520 11:38:24.752331   49001 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:38:24.752364   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:38:30.503490   49001 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.751107158s)
	I0520 11:38:30.503563   49001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:38:30.518745   49001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:38:30.530630   49001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:38:30.541997   49001 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:38:30.542012   49001 kubeadm.go:156] found existing configuration files:
	
	I0520 11:38:30.542047   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:38:30.552537   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:38:30.552578   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:38:30.563869   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:38:30.590860   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:38:30.590917   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:38:30.601141   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:38:30.610476   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:38:30.610526   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:38:30.619912   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:38:30.629267   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:38:30.629324   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
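After `kubeadm reset`, the stale-config check above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes files that do not reference it; here every file is already missing, so each grep exits with status 2 and the `rm -f` is a no-op. A simplified, hypothetical sketch of that check-and-remove logic (file paths and the endpoint string are taken from the log lines; the rest is illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any of the given kubeconfig files that do
// not reference the expected control-plane endpoint. Missing files are
// ignored, matching the "No such file or directory" results above.
func cleanStaleKubeconfigs(files []string, endpoint string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // file absent: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(f); err != nil {
				fmt.Printf("failed to remove stale %s: %v\n", f, err)
			}
		}
	}
}

func main() {
	cleanStaleKubeconfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}, "https://control-plane.minikube.internal:8443")
}
```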
	I0520 11:38:30.638628   49001 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:38:30.822466   49001 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:38:38.725380   49001 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:38:38.725431   49001 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:38:38.725509   49001 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:38:38.725657   49001 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:38:38.725803   49001 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:38:38.725901   49001 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:38:38.727789   49001 out.go:204]   - Generating certificates and keys ...
	I0520 11:38:38.727871   49001 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:38:38.727933   49001 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:38:38.728050   49001 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:38:38.728146   49001 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:38:38.728260   49001 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:38:38.728363   49001 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:38:38.728443   49001 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:38:38.728523   49001 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:38:38.728618   49001 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:38:38.728728   49001 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:38:38.728771   49001 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:38:38.728826   49001 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:38:38.728872   49001 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:38:38.728965   49001 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:38:38.729016   49001 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:38:38.729070   49001 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:38:38.729170   49001 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:38:38.729295   49001 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:38:38.729384   49001 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:38:38.730955   49001 out.go:204]   - Booting up control plane ...
	I0520 11:38:38.731059   49001 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:38:38.731146   49001 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:38:38.731229   49001 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:38:38.731361   49001 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:38:38.731457   49001 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:38:38.731498   49001 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:38:38.731646   49001 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:38:38.731753   49001 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:38:38.731837   49001 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 503.840079ms
	I0520 11:38:38.731899   49001 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:38:38.731959   49001 kubeadm.go:309] [api-check] The API server is healthy after 5.002265792s
	I0520 11:38:38.732104   49001 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:38:38.732262   49001 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:38:38.732337   49001 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:38:38.732588   49001 kubeadm.go:309] [mark-control-plane] Marking the node pause-120159 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:38:38.732673   49001 kubeadm.go:309] [bootstrap-token] Using token: 6sxsyt.uubjd8d69g0boyj1
	I0520 11:38:38.734835   49001 out.go:204]   - Configuring RBAC rules ...
	I0520 11:38:38.734934   49001 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:38:38.735038   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:38:38.735193   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:38:38.735339   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:38:38.735476   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:38:38.735580   49001 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:38:38.735715   49001 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:38:38.735784   49001 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:38:38.735867   49001 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:38:38.735880   49001 kubeadm.go:309] 
	I0520 11:38:38.735965   49001 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:38:38.735973   49001 kubeadm.go:309] 
	I0520 11:38:38.736073   49001 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:38:38.736086   49001 kubeadm.go:309] 
	I0520 11:38:38.736120   49001 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:38:38.736175   49001 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:38:38.736251   49001 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:38:38.736265   49001 kubeadm.go:309] 
	I0520 11:38:38.736340   49001 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:38:38.736350   49001 kubeadm.go:309] 
	I0520 11:38:38.736424   49001 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:38:38.736438   49001 kubeadm.go:309] 
	I0520 11:38:38.736532   49001 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:38:38.736628   49001 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:38:38.736720   49001 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:38:38.736727   49001 kubeadm.go:309] 
	I0520 11:38:38.736844   49001 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:38:38.736913   49001 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:38:38.736919   49001 kubeadm.go:309] 
	I0520 11:38:38.737027   49001 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6sxsyt.uubjd8d69g0boyj1 \
	I0520 11:38:38.737161   49001 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:38:38.737186   49001 kubeadm.go:309] 	--control-plane 
	I0520 11:38:38.737191   49001 kubeadm.go:309] 
	I0520 11:38:38.737278   49001 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:38:38.737286   49001 kubeadm.go:309] 
	I0520 11:38:38.737351   49001 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6sxsyt.uubjd8d69g0boyj1 \
	I0520 11:38:38.737451   49001 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:38:38.737461   49001 cni.go:84] Creating CNI manager for ""
	I0520 11:38:38.737467   49001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:38:38.738864   49001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:38:38.740207   49001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:38:38.752858   49001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
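The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the built-in bridge CNI plugin mentioned two lines up. The exact contents are not reproduced in this log; the snippet below writes a minimal, hypothetical bridge conflist of the same general shape (the pod subnet, plugin options, and second portmap plugin are assumptions, not minikube's actual template):

```go
package main

import "os"

// A minimal bridge CNI configuration of the kind written to
// /etc/cni/net.d/1-k8s.conflist. The subnet and plugin options here are
// illustrative; minikube's real template may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```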
	I0520 11:38:38.778813   49001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:38:38.778886   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:38.778915   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-120159 minikube.k8s.io/updated_at=2024_05_20T11_38_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=pause-120159 minikube.k8s.io/primary=true
	I0520 11:38:38.911412   49001 ops.go:34] apiserver oom_adj: -16
	I0520 11:38:38.911459   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:39.411546   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:39.911560   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:40.411480   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:40.912372   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:41.411567   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:41.912007   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:42.411473   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:42.911855   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:43.412327   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:43.911823   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:44.411514   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:44.911954   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:45.412331   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:45.911784   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:46.411858   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:46.912159   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:47.412449   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:47.911600   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:48.412431   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:48.911489   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:49.411661   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:49.911564   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:50.412078   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:50.911705   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:51.411618   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:51.912458   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:52.051101   49001 kubeadm.go:1107] duration metric: took 13.272266313s to wait for elevateKubeSystemPrivileges
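The burst of identical `kubectl get sa default` invocations above is the start path waiting, at roughly half-second intervals, for the "default" service account to appear in the freshly initialized cluster (the elevateKubeSystemPrivileges step, 13.27s here). A hypothetical sketch of such a wait loop, reusing the binary and kubeconfig paths shown in the log (again a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until the
// command succeeds or the timeout elapses.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		fmt.Println(err)
	}
}
```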
	W0520 11:38:52.051148   49001 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:38:52.051159   49001 kubeadm.go:393] duration metric: took 4m29.435831598s to StartCluster
	I0520 11:38:52.051186   49001 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:52.051291   49001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:38:52.052252   49001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:52.052473   49001 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:38:52.053805   49001 out.go:177] * Verifying Kubernetes components...
	I0520 11:38:52.052570   49001 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:38:52.052699   49001 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:38:52.055113   49001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:38:52.056657   49001 out.go:177] * Enabled addons: 
	I0520 11:38:52.058676   49001 addons.go:505] duration metric: took 6.103036ms for enable addons: enabled=[]
	I0520 11:38:52.241611   49001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:38:52.303879   49001 node_ready.go:35] waiting up to 6m0s for node "pause-120159" to be "Ready" ...
	I0520 11:38:52.313545   49001 node_ready.go:49] node "pause-120159" has status "Ready":"True"
	I0520 11:38:52.313565   49001 node_ready.go:38] duration metric: took 9.653609ms for node "pause-120159" to be "Ready" ...
	I0520 11:38:52.313573   49001 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:52.321530   49001 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k4bcq" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.344982   49001 pod_ready.go:92] pod "coredns-7db6d8ff4d-k4bcq" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.345013   49001 pod_ready.go:81] duration metric: took 1.023459579s for pod "coredns-7db6d8ff4d-k4bcq" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.345022   49001 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n2sf5" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.361942   49001 pod_ready.go:92] pod "coredns-7db6d8ff4d-n2sf5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.361966   49001 pod_ready.go:81] duration metric: took 16.937707ms for pod "coredns-7db6d8ff4d-n2sf5" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.361978   49001 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.375098   49001 pod_ready.go:92] pod "etcd-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.375124   49001 pod_ready.go:81] duration metric: took 13.139918ms for pod "etcd-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.375136   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.380057   49001 pod_ready.go:92] pod "kube-apiserver-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.380074   49001 pod_ready.go:81] duration metric: took 4.930933ms for pod "kube-apiserver-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.380083   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.507519   49001 pod_ready.go:92] pod "kube-controller-manager-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.507541   49001 pod_ready.go:81] duration metric: took 127.451914ms for pod "kube-controller-manager-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.507550   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28q5d" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.907081   49001 pod_ready.go:92] pod "kube-proxy-28q5d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.907104   49001 pod_ready.go:81] duration metric: took 399.548035ms for pod "kube-proxy-28q5d" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.907113   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:54.307668   49001 pod_ready.go:92] pod "kube-scheduler-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:54.307689   49001 pod_ready.go:81] duration metric: took 400.570058ms for pod "kube-scheduler-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:54.307696   49001 pod_ready.go:38] duration metric: took 1.994114326s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:54.307709   49001 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:38:54.307762   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:54.323119   49001 api_server.go:72] duration metric: took 2.270617012s to wait for apiserver process to appear ...
	I0520 11:38:54.323138   49001 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:38:54.323151   49001 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0520 11:38:54.328102   49001 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0520 11:38:54.329086   49001 api_server.go:141] control plane version: v1.30.1
	I0520 11:38:54.329108   49001 api_server.go:131] duration metric: took 5.96558ms to wait for apiserver health ...
	I0520 11:38:54.329115   49001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:38:54.509891   49001 system_pods.go:59] 7 kube-system pods found
	I0520 11:38:54.509920   49001 system_pods.go:61] "coredns-7db6d8ff4d-k4bcq" [60a35b25-1142-4747-9d9e-8b681f1b0a4f] Running
	I0520 11:38:54.509928   49001 system_pods.go:61] "coredns-7db6d8ff4d-n2sf5" [5aa9b938-1a9f-4278-80c6-ede8bffac8c5] Running
	I0520 11:38:54.509933   49001 system_pods.go:61] "etcd-pause-120159" [756cc6fb-de0b-4a0a-9af3-ea55481a963a] Running
	I0520 11:38:54.509938   49001 system_pods.go:61] "kube-apiserver-pause-120159" [f5e46550-70fa-4227-81c0-c48f3de2633e] Running
	I0520 11:38:54.509946   49001 system_pods.go:61] "kube-controller-manager-pause-120159" [3cf37af2-b9c6-4cbe-8a2e-6faab220311a] Running
	I0520 11:38:54.509951   49001 system_pods.go:61] "kube-proxy-28q5d" [eb2d6eed-e833-4f59-b070-efe984dd2d32] Running
	I0520 11:38:54.509957   49001 system_pods.go:61] "kube-scheduler-pause-120159" [c1329c69-f100-4b6d-bac9-207b6ed018e1] Running
	I0520 11:38:54.509963   49001 system_pods.go:74] duration metric: took 180.842613ms to wait for pod list to return data ...
	I0520 11:38:54.509973   49001 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:38:54.706824   49001 default_sa.go:45] found service account: "default"
	I0520 11:38:54.706848   49001 default_sa.go:55] duration metric: took 196.867998ms for default service account to be created ...
	I0520 11:38:54.706857   49001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:38:54.910080   49001 system_pods.go:86] 7 kube-system pods found
	I0520 11:38:54.910110   49001 system_pods.go:89] "coredns-7db6d8ff4d-k4bcq" [60a35b25-1142-4747-9d9e-8b681f1b0a4f] Running
	I0520 11:38:54.910118   49001 system_pods.go:89] "coredns-7db6d8ff4d-n2sf5" [5aa9b938-1a9f-4278-80c6-ede8bffac8c5] Running
	I0520 11:38:54.910126   49001 system_pods.go:89] "etcd-pause-120159" [756cc6fb-de0b-4a0a-9af3-ea55481a963a] Running
	I0520 11:38:54.910131   49001 system_pods.go:89] "kube-apiserver-pause-120159" [f5e46550-70fa-4227-81c0-c48f3de2633e] Running
	I0520 11:38:54.910136   49001 system_pods.go:89] "kube-controller-manager-pause-120159" [3cf37af2-b9c6-4cbe-8a2e-6faab220311a] Running
	I0520 11:38:54.910142   49001 system_pods.go:89] "kube-proxy-28q5d" [eb2d6eed-e833-4f59-b070-efe984dd2d32] Running
	I0520 11:38:54.910148   49001 system_pods.go:89] "kube-scheduler-pause-120159" [c1329c69-f100-4b6d-bac9-207b6ed018e1] Running
	I0520 11:38:54.910156   49001 system_pods.go:126] duration metric: took 203.292455ms to wait for k8s-apps to be running ...
	I0520 11:38:54.910169   49001 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:38:54.910218   49001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:38:54.925529   49001 system_svc.go:56] duration metric: took 15.353979ms WaitForService to wait for kubelet
	I0520 11:38:54.925556   49001 kubeadm.go:576] duration metric: took 2.873055164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:38:54.925577   49001 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:38:55.108320   49001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:38:55.108351   49001 node_conditions.go:123] node cpu capacity is 2
	I0520 11:38:55.108363   49001 node_conditions.go:105] duration metric: took 182.781309ms to run NodePressure ...
	I0520 11:38:55.108373   49001 start.go:240] waiting for startup goroutines ...
	I0520 11:38:55.108380   49001 start.go:245] waiting for cluster config update ...
	I0520 11:38:55.108386   49001 start.go:254] writing updated cluster config ...
	I0520 11:38:55.108713   49001 ssh_runner.go:195] Run: rm -f paused
	I0520 11:38:55.157080   49001 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:38:55.159084   49001 out.go:177] * Done! kubectl is now configured to use "pause-120159" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 11:38:55 pause-120159 crio[3036]: time="2024-05-20 11:38:55.983387083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b3228c3-f545-4e3e-8435-357f3fdacd82 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:55 pause-120159 crio[3036]: time="2024-05-20 11:38:55.986173618Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10b5e6a4-a597-4714-a1b3-2a501ba6f00f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:55 pause-120159 crio[3036]: time="2024-05-20 11:38:55.986579422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205135986555012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10b5e6a4-a597-4714-a1b3-2a501ba6f00f name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:55 pause-120159 crio[3036]: time="2024-05-20 11:38:55.987162084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc114341-5988-4eba-a593-76702dc58d7e name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:55 pause-120159 crio[3036]: time="2024-05-20 11:38:55.987260099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc114341-5988-4eba-a593-76702dc58d7e name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:55 pause-120159 crio[3036]: time="2024-05-20 11:38:55.987423264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc114341-5988-4eba-a593-76702dc58d7e name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.028061961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f3b3477-a5e8-492f-b2da-e393ff66fd20 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.028184379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f3b3477-a5e8-492f-b2da-e393ff66fd20 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.029476116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2075504f-1fe7-4022-a740-8da94c4c539c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.030025765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205136030004816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2075504f-1fe7-4022-a740-8da94c4c539c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.030879712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76fce039-e075-4050-805e-092537c0b23e name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.030947270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76fce039-e075-4050-805e-092537c0b23e name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.031108830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76fce039-e075-4050-805e-092537c0b23e name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.073414741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fd1769b-ff05-4495-ae9d-097dd4ee4efc name=/runtime.v1.RuntimeService/Version
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.073507879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fd1769b-ff05-4495-ae9d-097dd4ee4efc name=/runtime.v1.RuntimeService/Version
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.074543028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=826f7918-b230-462f-8064-d3d1b7579bd0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.075050922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205136075026695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=826f7918-b230-462f-8064-d3d1b7579bd0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.075635344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31e7c2bd-478d-4277-bdfb-3597b42b7b89 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.075692605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31e7c2bd-478d-4277-bdfb-3597b42b7b89 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.075856321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31e7c2bd-478d-4277-bdfb-3597b42b7b89 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.087802426Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e2caf21-d918-455f-89a0-49b1d22b2f08 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.087970390Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-n2sf5,Uid:5aa9b938-1a9f-4278-80c6-ede8bffac8c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205132444421995,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:38:52.135661657Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4bcq,Uid:60a35b25-1142-4747-9d9e-8b681f1b0a4f,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205132376019809,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:38:52.065792534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&PodSandboxMetadata{Name:kube-proxy-28q5d,Uid:eb2d6eed-e833-4f59-b070-efe984dd2d32,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205132230960302,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[st
ring]string{kubernetes.io/config.seen: 2024-05-20T11:38:51.921100727Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-120159,Uid:7220233a7b29136ea4685855ec2b5f9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112642983147,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7220233a7b29136ea4685855ec2b5f9e,kubernetes.io/config.seen: 2024-05-20T11:38:32.196722505Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-120159,Uid:9a3625f506094e6e70bf9a09a
5ff7fab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112642049696,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a3625f506094e6e70bf9a09a5ff7fab,kubernetes.io/config.seen: 2024-05-20T11:38:32.196728665Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-120159,Uid:cf36b6031f0be4f966b49f3302053ddc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112639929593,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: cf36b6031f0be4f966b49f3302053ddc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.7:8443,kubernetes.io/config.hash: cf36b6031f0be4f966b49f3302053ddc,kubernetes.io/config.seen: 2024-05-20T11:38:32.196727310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&PodSandboxMetadata{Name:etcd-pause-120159,Uid:1abdb2a9421c1c51820204e59e8987bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112630841139,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.7:2379,kubernetes.io/config.hash: 1abdb2a9421c1c51820204e59e8987bc,kubernetes.io/config.
seen: 2024-05-20T11:38:32.196725945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5e2caf21-d918-455f-89a0-49b1d22b2f08 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.088518366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74527d7d-8d14-4307-8ebd-59a1dcb0e1ac name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.088633386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74527d7d-8d14-4307-8ebd-59a1dcb0e1ac name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:56 pause-120159 crio[3036]: time="2024-05-20 11:38:56.088972869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74527d7d-8d14-4307-8ebd-59a1dcb0e1ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e335c47534890       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   0                   89a544ecbe8e4       coredns-7db6d8ff4d-n2sf5
	ec137482565e4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   0                   592a5391281f4       coredns-7db6d8ff4d-k4bcq
	421d33a1d4f98       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   3 seconds ago       Running             kube-proxy                0                   5d41ec5fcf693       kube-proxy-28q5d
	d9f0d694e4ee0       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   23 seconds ago      Running             kube-apiserver            1                   aacce476a8d20       kube-apiserver-pause-120159
	be213ff811e3c       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   23 seconds ago      Running             kube-controller-manager   7                   c13f04966d6e6       kube-controller-manager-pause-120159
	df5629e67d15f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      3                   890cc247570bf       etcd-pause-120159
	0f15f10b6f508       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   23 seconds ago      Running             kube-scheduler            3                   148e0732394f4       kube-scheduler-pause-120159
	
	
	==> coredns [e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               pause-120159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-120159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=pause-120159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_38_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:38:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-120159
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:38:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:38:38 +0000   Mon, 20 May 2024 11:38:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:38:38 +0000   Mon, 20 May 2024 11:38:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:38:38 +0000   Mon, 20 May 2024 11:38:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:38:38 +0000   Mon, 20 May 2024 11:38:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    pause-120159
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7158a385ce02475b8afc41cda7c21255
	  System UUID:                7158a385-ce02-475b-8afc-41cda7c21255
	  Boot ID:                    30dd72df-d2bf-4cec-9531-c150555ff449
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-k4bcq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4s
	  kube-system                 coredns-7db6d8ff4d-n2sf5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4s
	  kube-system                 etcd-pause-120159                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         20s
	  kube-system                 kube-apiserver-pause-120159             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-pause-120159    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-proxy-28q5d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-pause-120159             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 3s    kube-proxy       
	  Normal  Starting                 18s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18s   kubelet          Node pause-120159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s   kubelet          Node pause-120159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s   kubelet          Node pause-120159 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s    node-controller  Node pause-120159 event: Registered Node pause-120159 in Controller
	
	
	==> dmesg <==
	[  +0.151039] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.309292] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.656743] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.076994] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.305407] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +1.122133] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.460122] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.093543] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.528876] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.009833] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.969892] kauditd_printk_skb: 90 callbacks suppressed
	[  +6.455929] systemd-fstab-generator[2544]: Ignoring "noauto" option for root device
	[  +0.235337] systemd-fstab-generator[2637]: Ignoring "noauto" option for root device
	[  +0.486265] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.193781] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.432032] systemd-fstab-generator[2899]: Ignoring "noauto" option for root device
	[May20 11:34] systemd-fstab-generator[3159]: Ignoring "noauto" option for root device
	[  +0.102705] kauditd_printk_skb: 171 callbacks suppressed
	[  +2.015709] systemd-fstab-generator[3283]: Ignoring "noauto" option for root device
	[ +12.433356] kauditd_printk_skb: 74 callbacks suppressed
	[May20 11:38] systemd-fstab-generator[10569]: Ignoring "noauto" option for root device
	[  +6.063610] systemd-fstab-generator[10889]: Ignoring "noauto" option for root device
	[  +0.089784] kauditd_printk_skb: 75 callbacks suppressed
	[ +14.254898] systemd-fstab-generator[11114]: Ignoring "noauto" option for root device
	[  +0.102164] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08] <==
	{"level":"info","ts":"2024-05-20T11:38:33.178381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:38:33.178668Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"856b77cd5251110c","initial-advertise-peer-urls":["https://192.168.50.7:2380"],"listen-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.7:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:38:33.178717Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:38:33.178868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-05-20T11:38:33.178893Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-05-20T11:38:33.181803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c switched to configuration voters=(9613909553285501196)"}
	{"level":"info","ts":"2024-05-20T11:38:33.182093Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","added-peer-id":"856b77cd5251110c","added-peer-peer-urls":["https://192.168.50.7:2380"]}
	{"level":"info","ts":"2024-05-20T11:38:33.531676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T11:38:33.531786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T11:38:33.53184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 1"}
	{"level":"info","ts":"2024-05-20T11:38:33.531879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.531904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.531931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.531956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.535827Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:pause-120159 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:38:33.536039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:38:33.536641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:38:33.539032Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.544827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:38:33.5469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.546975Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.547016Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.539982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:38:33.547029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:38:33.556333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	
	
	==> kernel <==
	 11:38:56 up 7 min,  0 users,  load average: 0.53, 0.53, 0.29
	Linux pause-120159 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d] <==
	I0520 11:38:35.545818       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 11:38:35.545859       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 11:38:35.545885       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:38:35.546564       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 11:38:35.547346       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:38:35.548185       1 controller.go:615] quota admission added evaluator for: namespaces
	I0520 11:38:35.548994       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:38:35.549040       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 11:38:35.549351       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 11:38:35.594407       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:38:36.410851       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 11:38:36.416228       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 11:38:36.416316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:38:36.954867       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:38:37.000766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 11:38:37.144523       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 11:38:37.150875       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.7]
	I0520 11:38:37.151784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 11:38:37.156420       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 11:38:37.465158       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 11:38:38.103139       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 11:38:38.142471       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 11:38:38.166522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 11:38:51.866124       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 11:38:51.988465       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d] <==
	I0520 11:38:51.833988       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-120159"
	I0520 11:38:51.834033       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 11:38:51.837679       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0520 11:38:51.882775       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0520 11:38:51.918837       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 11:38:51.965397       1 shared_informer.go:320] Caches are synced for deployment
	I0520 11:38:51.965517       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0520 11:38:51.965560       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 11:38:51.970124       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 11:38:52.017030       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 11:38:52.031229       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 11:38:52.063074       1 shared_informer.go:320] Caches are synced for disruption
	I0520 11:38:52.076268       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 11:38:52.154878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="151.328629ms"
	I0520 11:38:52.165024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.05078ms"
	I0520 11:38:52.169461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.902µs"
	I0520 11:38:52.178752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.576µs"
	I0520 11:38:52.457200       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 11:38:52.514864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 11:38:52.514888       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 11:38:53.217135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.294µs"
	I0520 11:38:53.311395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.532663ms"
	I0520 11:38:53.312743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.878µs"
	I0520 11:38:53.331833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.699713ms"
	I0520 11:38:53.333159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.644µs"
	
	
	==> kube-proxy [421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824] <==
	I0520 11:38:52.596693       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:38:52.615110       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.7"]
	I0520 11:38:52.705758       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:38:52.705792       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:38:52.705807       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:38:52.708688       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:38:52.708880       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:38:52.708897       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:38:52.710430       1 config.go:192] "Starting service config controller"
	I0520 11:38:52.710448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:38:52.710470       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:38:52.710473       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:38:52.711114       1 config.go:319] "Starting node config controller"
	I0520 11:38:52.711122       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:38:52.812997       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:38:52.815730       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:38:52.816041       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb] <==
	W0520 11:38:35.558893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:38:35.559043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:38:35.559213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:38:35.559249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 11:38:35.559284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:38:35.559403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:38:35.559385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:38:35.559546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:38:36.482009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:38:36.482065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 11:38:36.482007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:38:36.482090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:38:36.484244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:38:36.484327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:38:36.493852       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:38:36.493902       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:38:36.494119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:38:36.494180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:38:36.629862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 11:38:36.629912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 11:38:36.674088       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:38:36.674212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 11:38:36.709357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:38:36.709470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 11:38:39.740526       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 11:38:38 pause-120159 kubelet[10896]: I0520 11:38:38.336950   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a3625f506094e6e70bf9a09a5ff7fab-ca-certs\") pod \"kube-controller-manager-pause-120159\" (UID: \"9a3625f506094e6e70bf9a09a5ff7fab\") " pod="kube-system/kube-controller-manager-pause-120159"
	May 20 11:38:38 pause-120159 kubelet[10896]: I0520 11:38:38.337002   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a3625f506094e6e70bf9a09a5ff7fab-k8s-certs\") pod \"kube-controller-manager-pause-120159\" (UID: \"9a3625f506094e6e70bf9a09a5ff7fab\") " pod="kube-system/kube-controller-manager-pause-120159"
	May 20 11:38:38 pause-120159 kubelet[10896]: I0520 11:38:38.337104   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a3625f506094e6e70bf9a09a5ff7fab-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-120159\" (UID: \"9a3625f506094e6e70bf9a09a5ff7fab\") " pod="kube-system/kube-controller-manager-pause-120159"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.009331   10896 apiserver.go:52] "Watching apiserver"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.035659   10896 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 11:38:39 pause-120159 kubelet[10896]: E0520 11:38:39.166962   10896 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-120159\" already exists" pod="kube-system/kube-scheduler-pause-120159"
	May 20 11:38:39 pause-120159 kubelet[10896]: E0520 11:38:39.172652   10896 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-120159\" already exists" pod="kube-system/kube-apiserver-pause-120159"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.250032   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-120159" podStartSLOduration=1.2500004 podStartE2EDuration="1.2500004s" podCreationTimestamp="2024-05-20 11:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.228048013 +0000 UTC m=+1.316700045" watchObservedRunningTime="2024-05-20 11:38:39.2500004 +0000 UTC m=+1.338652428"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.261836   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-120159" podStartSLOduration=1.261818431 podStartE2EDuration="1.261818431s" podCreationTimestamp="2024-05-20 11:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.251094904 +0000 UTC m=+1.339746947" watchObservedRunningTime="2024-05-20 11:38:39.261818431 +0000 UTC m=+1.350470460"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.272852   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-120159" podStartSLOduration=3.272835086 podStartE2EDuration="3.272835086s" podCreationTimestamp="2024-05-20 11:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.262279739 +0000 UTC m=+1.350931771" watchObservedRunningTime="2024-05-20 11:38:39.272835086 +0000 UTC m=+1.361487110"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.285050   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-120159" podStartSLOduration=1.285034108 podStartE2EDuration="1.285034108s" podCreationTimestamp="2024-05-20 11:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.273558779 +0000 UTC m=+1.362210812" watchObservedRunningTime="2024-05-20 11:38:39.285034108 +0000 UTC m=+1.373686140"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.921180   10896 topology_manager.go:215] "Topology Admit Handler" podUID="eb2d6eed-e833-4f59-b070-efe984dd2d32" podNamespace="kube-system" podName="kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936516   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2725\" (UniqueName: \"kubernetes.io/projected/eb2d6eed-e833-4f59-b070-efe984dd2d32-kube-api-access-k2725\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936577   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb2d6eed-e833-4f59-b070-efe984dd2d32-xtables-lock\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936641   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb2d6eed-e833-4f59-b070-efe984dd2d32-lib-modules\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936659   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb2d6eed-e833-4f59-b070-efe984dd2d32-kube-proxy\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.066050   10896 topology_manager.go:215] "Topology Admit Handler" podUID="60a35b25-1142-4747-9d9e-8b681f1b0a4f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k4bcq"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.135744   10896 topology_manager.go:215] "Topology Admit Handler" podUID="5aa9b938-1a9f-4278-80c6-ede8bffac8c5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n2sf5"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.137756   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8fv8\" (UniqueName: \"kubernetes.io/projected/60a35b25-1142-4747-9d9e-8b681f1b0a4f-kube-api-access-m8fv8\") pod \"coredns-7db6d8ff4d-k4bcq\" (UID: \"60a35b25-1142-4747-9d9e-8b681f1b0a4f\") " pod="kube-system/coredns-7db6d8ff4d-k4bcq"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.137805   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a35b25-1142-4747-9d9e-8b681f1b0a4f-config-volume\") pod \"coredns-7db6d8ff4d-k4bcq\" (UID: \"60a35b25-1142-4747-9d9e-8b681f1b0a4f\") " pod="kube-system/coredns-7db6d8ff4d-k4bcq"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.238983   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa9b938-1a9f-4278-80c6-ede8bffac8c5-config-volume\") pod \"coredns-7db6d8ff4d-n2sf5\" (UID: \"5aa9b938-1a9f-4278-80c6-ede8bffac8c5\") " pod="kube-system/coredns-7db6d8ff4d-n2sf5"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.239024   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjkl\" (UniqueName: \"kubernetes.io/projected/5aa9b938-1a9f-4278-80c6-ede8bffac8c5-kube-api-access-4kjkl\") pod \"coredns-7db6d8ff4d-n2sf5\" (UID: \"5aa9b938-1a9f-4278-80c6-ede8bffac8c5\") " pod="kube-system/coredns-7db6d8ff4d-n2sf5"
	May 20 11:38:53 pause-120159 kubelet[10896]: I0520 11:38:53.253222   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n2sf5" podStartSLOduration=1.253196096 podStartE2EDuration="1.253196096s" podCreationTimestamp="2024-05-20 11:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:53.218187369 +0000 UTC m=+15.306839400" watchObservedRunningTime="2024-05-20 11:38:53.253196096 +0000 UTC m=+15.341848128"
	May 20 11:38:53 pause-120159 kubelet[10896]: I0520 11:38:53.285500   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-28q5d" podStartSLOduration=2.2854812620000002 podStartE2EDuration="2.285481262s" podCreationTimestamp="2024-05-20 11:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:53.254036888 +0000 UTC m=+15.342688919" watchObservedRunningTime="2024-05-20 11:38:53.285481262 +0000 UTC m=+15.374133291"
	May 20 11:38:53 pause-120159 kubelet[10896]: I0520 11:38:53.317701   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k4bcq" podStartSLOduration=1.31751994 podStartE2EDuration="1.31751994s" podCreationTimestamp="2024-05-20 11:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:53.286358007 +0000 UTC m=+15.375010039" watchObservedRunningTime="2024-05-20 11:38:53.31751994 +0000 UTC m=+15.406171974"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-120159 -n pause-120159
helpers_test.go:261: (dbg) Run:  kubectl --context pause-120159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-120159 -n pause-120159
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-120159 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-120159 logs -n 25: (1.267944162s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:33 UTC |
	| start   | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:33 UTC | 20 May 24 11:34 UTC |
	|         | --no-kubernetes --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-735967                           | force-systemd-env-735967  | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p force-systemd-flag-199580                          | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-388098 sudo                           | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:34 UTC |
	| start   | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:34 UTC | 20 May 24 11:35 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-199580 ssh cat                     | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-199580                          | force-systemd-flag-199580 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	| start   | -p cert-expiration-355991                             | cert-expiration-355991    | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-388098 sudo                           | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:35 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-388098                                | NoKubernetes-388098       | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:35 UTC |
	| start   | -p cert-options-477107                                | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:35 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:35 UTC | 20 May 24 11:36 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-477107 ssh                               | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-477107 -- sudo                        | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-477107                                | cert-options-477107       | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p old-k8s-version-210420                             | old-k8s-version-210420    | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                          | kubernetes-upgrade-854547 | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                  | no-preload-814599         | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599            | no-preload-814599         | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-814599                                  | no-preload-814599         | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:36:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:36:57.740218   55177 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:36:57.740452   55177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:57.740460   55177 out.go:304] Setting ErrFile to fd 2...
	I0520 11:36:57.740464   55177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:57.740638   55177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:36:57.741232   55177 out.go:298] Setting JSON to false
	I0520 11:36:57.742142   55177 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4761,"bootTime":1716200257,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:36:57.742198   55177 start.go:139] virtualization: kvm guest
	I0520 11:36:57.744450   55177 out.go:177] * [no-preload-814599] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:36:57.745984   55177 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:36:57.746944   55177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:36:57.747962   55177 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:36:57.745964   55177 notify.go:220] Checking for updates...
	I0520 11:36:57.749984   55177 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:57.751225   55177 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:36:57.752394   55177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:36:57.753903   55177 config.go:182] Loaded profile config "cert-expiration-355991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:57.753997   55177 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:36:57.754099   55177 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:57.754163   55177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:36:57.788611   55177 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:36:57.789885   55177 start.go:297] selected driver: kvm2
	I0520 11:36:57.789901   55177 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:36:57.789911   55177 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:36:57.790632   55177 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.790715   55177 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:36:57.805273   55177 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:36:57.805308   55177 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:36:57.805578   55177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:36:57.805646   55177 cni.go:84] Creating CNI manager for ""
	I0520 11:36:57.805659   55177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:36:57.805669   55177 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:36:57.805712   55177 start.go:340] cluster config:
	{Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:36:57.805826   55177 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.807505   55177 out.go:177] * Starting "no-preload-814599" primary control-plane node in "no-preload-814599" cluster
	I0520 11:36:58.463061   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.463494   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.463530   54732 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:36:58.463539   54732 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:36:58.463857   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420
	I0520 11:36:58.538368   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:36:58.538404   54732 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:36:58.538417   54732 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:36:58.540887   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.541361   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.541394   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.541531   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:36:58.541561   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:36:58.541590   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:36:58.541614   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:36:58.541632   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:36:58.669046   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:36:58.669304   54732 main.go:141] libmachine: (old-k8s-version-210420) KVM machine creation complete!
	I0520 11:36:58.669649   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:36:58.670148   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:58.670359   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:58.670540   54732 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:36:58.670559   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:36:58.671802   54732 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:36:58.671815   54732 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:36:58.671820   54732 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:36:58.671826   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.673985   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.674327   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.674357   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.674419   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.674570   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.674717   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.674863   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.675080   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.675315   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.675331   54732 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:36:58.788510   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:36:58.788531   54732 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:36:58.788538   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.791777   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.792093   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.792120   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.792298   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.792515   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.792694   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.792833   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.792980   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.793159   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.793169   54732 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:36:58.905732   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:36:58.905795   54732 main.go:141] libmachine: found compatible host: buildroot
	I0520 11:36:58.905808   54732 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:36:58.905817   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:58.906037   54732 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:36:58.906069   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:58.906235   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.908812   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.909252   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.909279   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.909457   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.909640   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.909803   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.909963   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.910137   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.910360   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.910382   54732 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:36:59.043365   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:36:59.043395   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.046494   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.047005   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.047029   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.047181   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.047365   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.047541   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.047678   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.047866   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:59.048197   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:59.048223   54732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:36:59.175659   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:36:59.175698   54732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:36:59.175720   54732 buildroot.go:174] setting up certificates
	I0520 11:36:59.175734   54732 provision.go:84] configureAuth start
	I0520 11:36:59.175750   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:59.176024   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:36:59.179114   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.179481   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.179513   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.179636   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.182365   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.182693   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.182728   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.182892   54732 provision.go:143] copyHostCerts
	I0520 11:36:59.182948   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:36:59.182963   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:36:59.183017   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:36:59.183120   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:36:59.183130   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:36:59.183162   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:36:59.183225   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:36:59.183232   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:36:59.183252   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:36:59.183325   54732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:36:59.516574   54732 provision.go:177] copyRemoteCerts
	I0520 11:36:59.516648   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:36:59.516694   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.520890   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.521316   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.521452   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.521534   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.521799   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.521994   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.522137   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:36:59.615162   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:36:59.639370   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:36:59.669948   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:36:59.697465   54732 provision.go:87] duration metric: took 521.714661ms to configureAuth
	I0520 11:36:59.697495   54732 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:36:59.697690   54732 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:36:59.697772   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.700598   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.700997   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.701028   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.701242   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.701414   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.701561   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.701663   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.701879   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:59.702086   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:59.702102   54732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:36:57.808628   55177 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:36:57.808751   55177 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:36:57.808782   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json: {Name:mkec34da3f05f1bf45cb4aef1e29a182c314b63c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:36:57.808855   55177 cache.go:107] acquiring lock: {Name:mk46c7017dd2c3cce534f4173742b327488567e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808895   55177 cache.go:107] acquiring lock: {Name:mk11b285bf8b40bdd567397b87b729e6c613cbbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808947   55177 cache.go:107] acquiring lock: {Name:mkf1a1c7b346d55bc111333ed6814f4739fb8a8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808972   55177 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:36:57.809002   55177 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:36:57.809019   55177 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:36:57.809052   55177 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:36:57.809009   55177 cache.go:107] acquiring lock: {Name:mkfd235693f296d62787eba91a01a7cbf9d71206 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809024   55177 cache.go:107] acquiring lock: {Name:mkf72f42066e10532cd7a6f248d435645a1b3916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.808868   55177 cache.go:107] acquiring lock: {Name:mkee7b66a56ccefc27f3c797a42e758b7405686e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809024   55177 cache.go:107] acquiring lock: {Name:mk0994b0f95001ba6a6a39dda38f18a1ab80ab78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809191   55177 cache.go:115] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0520 11:36:57.809210   55177 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:36:57.809244   55177 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:36:57.809263   55177 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:36:57.809206   55177 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 341.402µs
	I0520 11:36:57.809354   55177 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0520 11:36:57.809120   55177 cache.go:107] acquiring lock: {Name:mk09942a59dde7576e41568bb51f124628a11712 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:57.809598   55177 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:36:57.810488   55177 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:36:57.810643   55177 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:36:57.810770   55177 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:36:57.811202   55177 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:36:57.811227   55177 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:36:57.811565   55177 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:36:57.811603   55177 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:36:57.985168   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:36:58.046815   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:36:58.152865   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:36:58.155397   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:36:58.155440   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:36:58.157384   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0520 11:36:58.175790   55177 cache.go:162] opening:  /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:36:58.231723   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0520 11:36:58.231748   55177 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 422.825832ms
	I0520 11:36:58.231758   55177 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0520 11:36:58.278313   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0520 11:36:58.278340   55177 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1" took 469.363676ms
	I0520 11:36:58.278355   55177 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0520 11:36:59.049503   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0520 11:36:59.049528   55177 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.24056843s
	I0520 11:36:59.049540   55177 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0520 11:36:59.464515   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0520 11:36:59.464572   55177 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1" took 1.655502108s
	I0520 11:36:59.464587   55177 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0520 11:36:59.530024   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0520 11:36:59.530053   55177 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1" took 1.721210921s
	I0520 11:36:59.530067   55177 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0520 11:36:59.541542   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0520 11:36:59.541565   55177 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1" took 1.73259042s
	I0520 11:36:59.541575   55177 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0520 11:36:59.579247   55177 cache.go:157] /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 exists
	I0520 11:36:59.579272   55177 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0" took 1.77037797s
	I0520 11:36:59.579282   55177 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0520 11:36:59.579296   55177 cache.go:87] Successfully saved all images to host disk.
	I0520 11:37:00.294102   55177 start.go:364] duration metric: took 2.485109879s to acquireMachinesLock for "no-preload-814599"
	I0520 11:37:00.294157   55177 start.go:93] Provisioning new machine with config: &{Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:37:00.294274   55177 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 11:36:55.893603   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:55.893672   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:55.893688   49001 cri.go:89] found id: ""
	I0520 11:36:55.893697   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:55.893760   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.898164   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.902606   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:55.902675   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:55.955942   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:55.955970   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:55.955976   49001 cri.go:89] found id: ""
	I0520 11:36:55.955984   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:55.956040   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.961419   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:55.965977   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:55.966041   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:56.007439   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:56.007467   49001 cri.go:89] found id: ""
	I0520 11:36:56.007477   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:56.007535   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:56.013294   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:56.013360   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:56.051025   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:56.051054   49001 cri.go:89] found id: ""
	I0520 11:36:56.051064   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:36:56.051119   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:56.055384   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:56.055453   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:56.093343   49001 cri.go:89] found id: ""
	I0520 11:36:56.093368   49001 logs.go:276] 0 containers: []
	W0520 11:36:56.093379   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:56.093397   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:36:56.093412   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:36:56.175079   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:36:56.175107   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:36:56.175122   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:56.253081   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:56.253110   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:56.299222   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:36:56.299254   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:56.340853   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:36:56.340882   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:36:56.359408   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:36:56.359439   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:56.406249   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:36:56.406276   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:56.444515   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:36:56.444545   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:56.486746   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:36:56.486783   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:36:56.841246   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:36:56.841277   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:36:56.889777   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:36:56.889802   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:56.931813   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:36:56.931847   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:57.006897   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:36:57.006924   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:36:57.104536   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:36:57.104570   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:36:57.139795   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:36:57Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:36:57Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:36:59.640325   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:36:59.658995   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:36:59.659063   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:36:59.700016   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:36:59.700040   49001 cri.go:89] found id: ""
	I0520 11:36:59.700048   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:36:59.700097   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.705235   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:36:59.705303   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:36:59.750870   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:36:59.750902   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:36:59.750908   49001 cri.go:89] found id: ""
	I0520 11:36:59.750916   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:36:59.750970   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.755675   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.759495   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:36:59.759555   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:36:59.795202   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:36:59.795227   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:36:59.795232   49001 cri.go:89] found id: ""
	I0520 11:36:59.795239   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:36:59.795296   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.799284   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.803686   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:36:59.803737   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:36:59.847686   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:36:59.847715   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:36:59.847721   49001 cri.go:89] found id: ""
	I0520 11:36:59.847731   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:36:59.847786   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.852253   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.856208   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:36:59.856261   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:36:59.894425   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:36:59.894449   49001 cri.go:89] found id: ""
	I0520 11:36:59.894456   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:36:59.894506   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.899068   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:36:59.899137   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:36:59.937477   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:36:59.937504   49001 cri.go:89] found id: ""
	I0520 11:36:59.937512   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:36:59.937567   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:36:59.941932   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:36:59.942004   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:36:59.982641   49001 cri.go:89] found id: ""
	I0520 11:36:59.982672   49001 logs.go:276] 0 containers: []
	W0520 11:36:59.982682   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:36:59.982701   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:36:59.982716   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:00.026137   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:00.026179   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:00.076692   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:00.076720   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:00.120618   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:00.120657   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:00.136104   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:00.136126   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:00.212381   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:00.212411   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:00.288774   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:00.288808   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:00.621731   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:00.621763   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:00.718258   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:00.718295   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:00.761352   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:00.761378   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:00.837420   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:00.837449   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:00.837464   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:00.874929   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:00.874957   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:00.030830   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:37:00.030863   54732 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:37:00.030875   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetURL
	I0520 11:37:00.032252   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using libvirt version 6000000
	I0520 11:37:00.034407   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.034740   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.034776   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.034998   54732 main.go:141] libmachine: Docker is up and running!
	I0520 11:37:00.035022   54732 main.go:141] libmachine: Reticulating splines...
	I0520 11:37:00.035028   54732 client.go:171] duration metric: took 25.015121735s to LocalClient.Create
	I0520 11:37:00.035057   54732 start.go:167] duration metric: took 25.015189379s to libmachine.API.Create "old-k8s-version-210420"
	I0520 11:37:00.035076   54732 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:37:00.035090   54732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:37:00.035116   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.035406   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:37:00.035442   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.037625   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.037965   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.037999   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.038068   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.038248   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.038449   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.038628   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:37:00.128149   54732 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:37:00.132646   54732 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:37:00.132670   54732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:37:00.132739   54732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:37:00.132838   54732 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:37:00.132998   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:37:00.142874   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:00.169803   54732 start.go:296] duration metric: took 134.712167ms for postStartSetup
	I0520 11:37:00.169866   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:37:00.170434   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:00.173263   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.173608   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.173633   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.173934   54732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:37:00.174155   54732 start.go:128] duration metric: took 25.173945173s to createHost
	I0520 11:37:00.174185   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.176961   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.177375   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.177397   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.177708   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.177888   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.178074   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.178247   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.178432   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:00.178642   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:37:00.178659   54732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:37:00.293928   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205020.272115335
	
	I0520 11:37:00.293949   54732 fix.go:216] guest clock: 1716205020.272115335
	I0520 11:37:00.293959   54732 fix.go:229] Guest: 2024-05-20 11:37:00.272115335 +0000 UTC Remote: 2024-05-20 11:37:00.174170865 +0000 UTC m=+25.282981176 (delta=97.94447ms)
	I0520 11:37:00.294015   54732 fix.go:200] guest clock delta is within tolerance: 97.94447ms
	I0520 11:37:00.294023   54732 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 25.293893571s
	I0520 11:37:00.294050   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.294323   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:00.297134   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.297565   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.297596   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.297885   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298445   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298625   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298696   54732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:37:00.298744   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.298828   54732 ssh_runner.go:195] Run: cat /version.json
	I0520 11:37:00.298860   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.301978   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302405   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302462   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.302490   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302601   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.302743   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.302885   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.302980   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.303000   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.303029   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:37:00.303310   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.303464   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.303598   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.303715   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:37:00.415297   54732 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:37:00.415384   54732 ssh_runner.go:195] Run: systemctl --version
	I0520 11:37:00.422441   54732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:37:00.586815   54732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:37:00.594067   54732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:37:00.594148   54732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:37:00.610884   54732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:37:00.610907   54732 start.go:494] detecting cgroup driver to use...
	I0520 11:37:00.610976   54732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:37:00.628469   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:37:00.644580   54732 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:37:00.644642   54732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:37:00.659819   54732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:37:00.673915   54732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:37:00.804460   54732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:37:00.986774   54732 docker.go:233] disabling docker service ...
	I0520 11:37:00.986844   54732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:37:01.004845   54732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:37:01.019356   54732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:37:01.148680   54732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:37:01.269540   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
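
Before handing the node to CRI-O, the start routine tears down any Docker-based container runtime so that only one CRI socket is live. Condensed into plain shell, the sequence the log records above looks roughly like this (a sketch of the logged commands, not minikube's source):

    # stop, disable and mask cri-dockerd and docker so CRI-O owns the CRI socket
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
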
	I0520 11:37:01.285312   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:37:01.304010   54732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:37:01.304084   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.316378   54732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:37:01.316451   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.327728   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.338698   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.349526   54732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:37:01.360959   54732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:37:01.373008   54732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:37:01.373060   54732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:37:01.386782   54732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:37:01.397163   54732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:01.518110   54732 ssh_runner.go:195] Run: sudo systemctl restart crio
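
The sed, modprobe and sysctl calls above amount to the following CRI-O preparation sequence. This is a condensed sketch of what the log shows for this old-k8s-version profile (pause:3.2, cgroupfs), not a general recipe:

    # pause image and cgroup driver expected by Kubernetes v1.20.0 on this profile
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # bridged traffic must hit iptables, and IPv4 forwarding must be on
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
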
	I0520 11:37:01.665579   54732 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:37:01.665652   54732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:37:01.670666   54732 start.go:562] Will wait 60s for crictl version
	I0520 11:37:01.670709   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:01.674485   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:37:01.728836   54732 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:37:01.728982   54732 ssh_runner.go:195] Run: crio --version
	I0520 11:37:01.760458   54732 ssh_runner.go:195] Run: crio --version
	I0520 11:37:01.792553   54732 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:37:00.296371   55177 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 11:37:00.296581   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:37:00.296642   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:37:00.315548   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0520 11:37:00.316070   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:37:00.316715   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:37:00.316736   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:37:00.317283   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:37:00.317518   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:00.317838   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:00.321161   55177 start.go:159] libmachine.API.Create for "no-preload-814599" (driver="kvm2")
	I0520 11:37:00.321194   55177 client.go:168] LocalClient.Create starting
	I0520 11:37:00.321226   55177 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 11:37:00.321268   55177 main.go:141] libmachine: Decoding PEM data...
	I0520 11:37:00.321288   55177 main.go:141] libmachine: Parsing certificate...
	I0520 11:37:00.321359   55177 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 11:37:00.321387   55177 main.go:141] libmachine: Decoding PEM data...
	I0520 11:37:00.321400   55177 main.go:141] libmachine: Parsing certificate...
	I0520 11:37:00.321435   55177 main.go:141] libmachine: Running pre-create checks...
	I0520 11:37:00.321448   55177 main.go:141] libmachine: (no-preload-814599) Calling .PreCreateCheck
	I0520 11:37:00.321854   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:37:00.322302   55177 main.go:141] libmachine: Creating machine...
	I0520 11:37:00.322319   55177 main.go:141] libmachine: (no-preload-814599) Calling .Create
	I0520 11:37:00.322474   55177 main.go:141] libmachine: (no-preload-814599) Creating KVM machine...
	I0520 11:37:00.323915   55177 main.go:141] libmachine: (no-preload-814599) DBG | found existing default KVM network
	I0520 11:37:00.325990   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.325828   55243 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0520 11:37:00.326026   55177 main.go:141] libmachine: (no-preload-814599) DBG | created network xml: 
	I0520 11:37:00.326044   55177 main.go:141] libmachine: (no-preload-814599) DBG | <network>
	I0520 11:37:00.326052   55177 main.go:141] libmachine: (no-preload-814599) DBG |   <name>mk-no-preload-814599</name>
	I0520 11:37:00.326068   55177 main.go:141] libmachine: (no-preload-814599) DBG |   <dns enable='no'/>
	I0520 11:37:00.326079   55177 main.go:141] libmachine: (no-preload-814599) DBG |   
	I0520 11:37:00.326090   55177 main.go:141] libmachine: (no-preload-814599) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 11:37:00.326100   55177 main.go:141] libmachine: (no-preload-814599) DBG |     <dhcp>
	I0520 11:37:00.326114   55177 main.go:141] libmachine: (no-preload-814599) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 11:37:00.326126   55177 main.go:141] libmachine: (no-preload-814599) DBG |     </dhcp>
	I0520 11:37:00.326139   55177 main.go:141] libmachine: (no-preload-814599) DBG |   </ip>
	I0520 11:37:00.326149   55177 main.go:141] libmachine: (no-preload-814599) DBG |   
	I0520 11:37:00.326176   55177 main.go:141] libmachine: (no-preload-814599) DBG | </network>
	I0520 11:37:00.326187   55177 main.go:141] libmachine: (no-preload-814599) DBG | 
	I0520 11:37:00.332189   55177 main.go:141] libmachine: (no-preload-814599) DBG | trying to create private KVM network mk-no-preload-814599 192.168.39.0/24...
	I0520 11:37:00.411151   55177 main.go:141] libmachine: (no-preload-814599) DBG | private KVM network mk-no-preload-814599 192.168.39.0/24 created
	I0520 11:37:00.411190   55177 main.go:141] libmachine: (no-preload-814599) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599 ...
	I0520 11:37:00.411206   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.411077   55243 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:37:00.411235   55177 main.go:141] libmachine: (no-preload-814599) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:37:00.411256   55177 main.go:141] libmachine: (no-preload-814599) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 11:37:00.644043   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.643935   55243 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa...
	I0520 11:37:00.877196   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.877035   55243 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/no-preload-814599.rawdisk...
	I0520 11:37:00.877234   55177 main.go:141] libmachine: (no-preload-814599) DBG | Writing magic tar header
	I0520 11:37:00.877251   55177 main.go:141] libmachine: (no-preload-814599) DBG | Writing SSH key tar header
	I0520 11:37:00.877264   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:00.877182   55243 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599 ...
	I0520 11:37:00.877325   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599
	I0520 11:37:00.877378   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599 (perms=drwx------)
	I0520 11:37:00.877394   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 11:37:00.877407   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 11:37:00.877435   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 11:37:00.877453   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 11:37:00.877467   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:37:00.877494   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 11:37:00.877508   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 11:37:00.877523   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 11:37:00.877536   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home/jenkins
	I0520 11:37:00.877548   55177 main.go:141] libmachine: (no-preload-814599) DBG | Checking permissions on dir: /home
	I0520 11:37:00.877559   55177 main.go:141] libmachine: (no-preload-814599) DBG | Skipping /home - not owner
	I0520 11:37:00.877602   55177 main.go:141] libmachine: (no-preload-814599) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 11:37:00.877631   55177 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:37:00.878796   55177 main.go:141] libmachine: (no-preload-814599) define libvirt domain using xml: 
	I0520 11:37:00.878815   55177 main.go:141] libmachine: (no-preload-814599) <domain type='kvm'>
	I0520 11:37:00.878845   55177 main.go:141] libmachine: (no-preload-814599)   <name>no-preload-814599</name>
	I0520 11:37:00.878869   55177 main.go:141] libmachine: (no-preload-814599)   <memory unit='MiB'>2200</memory>
	I0520 11:37:00.878879   55177 main.go:141] libmachine: (no-preload-814599)   <vcpu>2</vcpu>
	I0520 11:37:00.878890   55177 main.go:141] libmachine: (no-preload-814599)   <features>
	I0520 11:37:00.878899   55177 main.go:141] libmachine: (no-preload-814599)     <acpi/>
	I0520 11:37:00.878909   55177 main.go:141] libmachine: (no-preload-814599)     <apic/>
	I0520 11:37:00.878921   55177 main.go:141] libmachine: (no-preload-814599)     <pae/>
	I0520 11:37:00.878934   55177 main.go:141] libmachine: (no-preload-814599)     
	I0520 11:37:00.878947   55177 main.go:141] libmachine: (no-preload-814599)   </features>
	I0520 11:37:00.878963   55177 main.go:141] libmachine: (no-preload-814599)   <cpu mode='host-passthrough'>
	I0520 11:37:00.878983   55177 main.go:141] libmachine: (no-preload-814599)   
	I0520 11:37:00.878997   55177 main.go:141] libmachine: (no-preload-814599)   </cpu>
	I0520 11:37:00.879005   55177 main.go:141] libmachine: (no-preload-814599)   <os>
	I0520 11:37:00.879012   55177 main.go:141] libmachine: (no-preload-814599)     <type>hvm</type>
	I0520 11:37:00.879021   55177 main.go:141] libmachine: (no-preload-814599)     <boot dev='cdrom'/>
	I0520 11:37:00.879029   55177 main.go:141] libmachine: (no-preload-814599)     <boot dev='hd'/>
	I0520 11:37:00.879035   55177 main.go:141] libmachine: (no-preload-814599)     <bootmenu enable='no'/>
	I0520 11:37:00.879039   55177 main.go:141] libmachine: (no-preload-814599)   </os>
	I0520 11:37:00.879045   55177 main.go:141] libmachine: (no-preload-814599)   <devices>
	I0520 11:37:00.879065   55177 main.go:141] libmachine: (no-preload-814599)     <disk type='file' device='cdrom'>
	I0520 11:37:00.879082   55177 main.go:141] libmachine: (no-preload-814599)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/boot2docker.iso'/>
	I0520 11:37:00.879095   55177 main.go:141] libmachine: (no-preload-814599)       <target dev='hdc' bus='scsi'/>
	I0520 11:37:00.879103   55177 main.go:141] libmachine: (no-preload-814599)       <readonly/>
	I0520 11:37:00.879111   55177 main.go:141] libmachine: (no-preload-814599)     </disk>
	I0520 11:37:00.879124   55177 main.go:141] libmachine: (no-preload-814599)     <disk type='file' device='disk'>
	I0520 11:37:00.879137   55177 main.go:141] libmachine: (no-preload-814599)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 11:37:00.879152   55177 main.go:141] libmachine: (no-preload-814599)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/no-preload-814599.rawdisk'/>
	I0520 11:37:00.879165   55177 main.go:141] libmachine: (no-preload-814599)       <target dev='hda' bus='virtio'/>
	I0520 11:37:00.879195   55177 main.go:141] libmachine: (no-preload-814599)     </disk>
	I0520 11:37:00.879219   55177 main.go:141] libmachine: (no-preload-814599)     <interface type='network'>
	I0520 11:37:00.879261   55177 main.go:141] libmachine: (no-preload-814599)       <source network='mk-no-preload-814599'/>
	I0520 11:37:00.879277   55177 main.go:141] libmachine: (no-preload-814599)       <model type='virtio'/>
	I0520 11:37:00.879287   55177 main.go:141] libmachine: (no-preload-814599)     </interface>
	I0520 11:37:00.879298   55177 main.go:141] libmachine: (no-preload-814599)     <interface type='network'>
	I0520 11:37:00.879311   55177 main.go:141] libmachine: (no-preload-814599)       <source network='default'/>
	I0520 11:37:00.879322   55177 main.go:141] libmachine: (no-preload-814599)       <model type='virtio'/>
	I0520 11:37:00.879335   55177 main.go:141] libmachine: (no-preload-814599)     </interface>
	I0520 11:37:00.879344   55177 main.go:141] libmachine: (no-preload-814599)     <serial type='pty'>
	I0520 11:37:00.879352   55177 main.go:141] libmachine: (no-preload-814599)       <target port='0'/>
	I0520 11:37:00.879362   55177 main.go:141] libmachine: (no-preload-814599)     </serial>
	I0520 11:37:00.879371   55177 main.go:141] libmachine: (no-preload-814599)     <console type='pty'>
	I0520 11:37:00.879381   55177 main.go:141] libmachine: (no-preload-814599)       <target type='serial' port='0'/>
	I0520 11:37:00.879390   55177 main.go:141] libmachine: (no-preload-814599)     </console>
	I0520 11:37:00.879400   55177 main.go:141] libmachine: (no-preload-814599)     <rng model='virtio'>
	I0520 11:37:00.879409   55177 main.go:141] libmachine: (no-preload-814599)       <backend model='random'>/dev/random</backend>
	I0520 11:37:00.879417   55177 main.go:141] libmachine: (no-preload-814599)     </rng>
	I0520 11:37:00.879424   55177 main.go:141] libmachine: (no-preload-814599)     
	I0520 11:37:00.879441   55177 main.go:141] libmachine: (no-preload-814599)     
	I0520 11:37:00.879453   55177 main.go:141] libmachine: (no-preload-814599)   </devices>
	I0520 11:37:00.879463   55177 main.go:141] libmachine: (no-preload-814599) </domain>
	I0520 11:37:00.879474   55177 main.go:141] libmachine: (no-preload-814599) 
	I0520 11:37:00.883084   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:5d:e0:9c in network default
	I0520 11:37:00.883705   55177 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:37:00.883725   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:00.884498   55177 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:37:00.884808   55177 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:37:00.885345   55177 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:37:00.886162   55177 main.go:141] libmachine: (no-preload-814599) Creating domain...
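
The XML dumped above is a plain libvirt domain definition: a 2-vCPU / 2200 MiB guest that boots the minikube ISO from a SCSI cdrom, with a raw-format data disk on virtio and two virtio NICs (one on the private mk-no-preload-814599 network, one on libvirt's default network). Outside of the kvm2 driver, the same create-and-boot step could be reproduced with virsh; the file names below are placeholders standing in for the XML shown in the log, not paths minikube itself uses:

    # define the private network and the domain, boot it, then poll for a DHCP lease
    virsh net-define mk-no-preload-814599.xml && virsh net-start mk-no-preload-814599
    virsh define no-preload-814599.xml
    virsh start no-preload-814599
    virsh net-dhcp-leases mk-no-preload-814599
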
	I0520 11:37:02.212827   55177 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:37:02.214013   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:02.214548   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:02.214612   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:02.214536   55243 retry.go:31] will retry after 230.354164ms: waiting for machine to come up
	I0520 11:37:02.447322   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:02.447961   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:02.447989   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:02.447909   55243 retry.go:31] will retry after 280.443553ms: waiting for machine to come up
	I0520 11:37:02.730530   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:02.731050   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:02.731082   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:02.730990   55243 retry.go:31] will retry after 326.461214ms: waiting for machine to come up
	I0520 11:37:01.793799   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:01.796811   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:01.797255   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:01.797279   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:01.797553   54732 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:37:01.801923   54732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:01.814795   54732 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:37:01.814936   54732 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:37:01.815002   54732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:01.846574   54732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:37:01.846646   54732 ssh_runner.go:195] Run: which lz4
	I0520 11:37:01.850943   54732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:37:01.855547   54732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:37:01.855574   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:37:03.680746   54732 crio.go:462] duration metric: took 1.829828922s to copy over tarball
	I0520 11:37:03.680826   54732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	W0520 11:37:00.911127   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:00Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:00Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:00.911147   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:00.911162   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:00.951935   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:00.951964   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:03.503295   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:03.524005   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:03.524072   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:03.587506   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:03.587532   49001 cri.go:89] found id: ""
	I0520 11:37:03.587541   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:03.587596   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.593553   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:03.593623   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:03.650465   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:03.650489   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:03.650495   49001 cri.go:89] found id: ""
	I0520 11:37:03.650510   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:03.650566   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.656299   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.662229   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:03.662294   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:03.714388   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:03.714413   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:03.714418   49001 cri.go:89] found id: ""
	I0520 11:37:03.714426   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:03.714479   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.719414   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.725329   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:03.725402   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:03.767484   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:03.767509   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:03.767514   49001 cri.go:89] found id: ""
	I0520 11:37:03.767523   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:03.767580   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.772446   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.778020   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:03.778089   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:03.816734   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:03.816761   49001 cri.go:89] found id: ""
	I0520 11:37:03.816770   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:03.816830   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.821555   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:03.821619   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:03.865361   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:03.865393   49001 cri.go:89] found id: ""
	I0520 11:37:03.865402   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:03.865462   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:03.872164   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:03.872238   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:03.917750   49001 cri.go:89] found id: ""
	I0520 11:37:03.917778   49001 logs.go:276] 0 containers: []
	W0520 11:37:03.917794   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:03.917812   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:03.917830   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:04.016511   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:04.016561   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:04.036288   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:04.036321   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:04.077030   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:04Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:04Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:04.077051   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:04.077063   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:04.129084   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:04.129122   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:04.212662   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:04.212682   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:04.212694   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:04.257399   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:04.257428   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:04.353803   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:04.353848   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:04.426381   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:04.426412   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:04.511533   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:04.511572   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:04.554295   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:04.554322   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:04.605411   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:04.605452   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:04.653059   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:04.653084   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:05.025930   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:05.025987   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
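
Process 49001, interleaved here from another test, is minikube's log collector: for each control-plane component it lists matching container IDs with crictl, tails the last 400 lines of each container's log, and tolerates failures such as the coredns "no such file or directory" error above when a pod's log directory has already been removed. A hand-run equivalent for one component, assuming crictl is installed on the node:

    # list all etcd containers (running or exited) and tail the log of each
    for id in $(sudo crictl ps -a --quiet --name=etcd); do
      sudo crictl logs --tail 400 "$id" || echo "logs unavailable for $id"
    done
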
	I0520 11:37:03.059936   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:03.060553   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:03.060584   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:03.060493   55243 retry.go:31] will retry after 426.155065ms: waiting for machine to come up
	I0520 11:37:03.488253   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:03.488813   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:03.488845   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:03.488768   55243 retry.go:31] will retry after 654.064749ms: waiting for machine to come up
	I0520 11:37:04.144322   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:04.144912   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:04.144948   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:04.144823   55243 retry.go:31] will retry after 883.921739ms: waiting for machine to come up
	I0520 11:37:05.030283   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:05.030696   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:05.030732   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:05.030660   55243 retry.go:31] will retry after 1.024526338s: waiting for machine to come up
	I0520 11:37:06.056497   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:06.057125   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:06.057164   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:06.057059   55243 retry.go:31] will retry after 1.473243532s: waiting for machine to come up
	I0520 11:37:07.532663   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:07.533190   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:07.533215   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:07.533139   55243 retry.go:31] will retry after 1.168974888s: waiting for machine to come up
	I0520 11:37:06.452881   54732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77202524s)
	I0520 11:37:06.452914   54732 crio.go:469] duration metric: took 2.77213672s to extract the tarball
	I0520 11:37:06.452948   54732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:37:06.496108   54732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:06.542385   54732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:37:06.542414   54732 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:37:06.542466   54732 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:06.542500   54732 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.542517   54732 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.542528   54732 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.542579   54732 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.542620   54732 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.542508   54732 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.542588   54732 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:37:06.543881   54732 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.543932   54732 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.544060   54732 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:37:06.544066   54732 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.544057   54732 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.544093   54732 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.544088   54732 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.544234   54732 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:06.689722   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.692704   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.700348   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.715480   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.736341   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:37:06.740710   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.745468   54732 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:37:06.745506   54732 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.745544   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.755418   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.845909   54732 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:37:06.845942   54732 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:37:06.845961   54732 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.845977   54732 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.846011   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.846035   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.893251   54732 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:37:06.893310   54732 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:37:06.893290   54732 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:37:06.893347   54732 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.893361   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.893406   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.904838   54732 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:37:06.904904   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.904920   54732 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.904968   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.910151   54732 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:37:06.910188   54732 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.910215   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.910228   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.910230   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.910288   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:37:06.910363   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.912545   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:07.003917   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:37:07.021998   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:37:07.038760   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:37:07.038853   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:37:07.038887   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:37:07.039429   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:37:07.042472   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:37:07.075138   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:37:07.461848   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:07.605217   54732 cache_images.go:92] duration metric: took 1.062783538s to LoadCachedImages
	W0520 11:37:07.605318   54732 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:37:07.605336   54732 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:37:07.605472   54732 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:37:07.605583   54732 ssh_runner.go:195] Run: crio config
	I0520 11:37:07.668258   54732 cni.go:84] Creating CNI manager for ""
	I0520 11:37:07.668286   54732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:07.668310   54732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:37:07.668337   54732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:37:07.668520   54732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:37:07.668591   54732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:37:07.679647   54732 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:37:07.679713   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:37:07.689918   54732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:37:07.711056   54732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:37:07.730271   54732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
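	The multi-document kubeadm/kubelet/kube-proxy configuration dumped above is the content copied to /var/tmp/minikube/kubeadm.yaml.new in the preceding scp step. As a minimal sketch (not minikube's own code) of how such a file can be sanity-checked before kubeadm init runs, the snippet below decodes each YAML document and prints the ClusterConfiguration networking fields; the gopkg.in/yaml.v3 dependency and the clusterConfig struct are illustrative assumptions, not upstream API types.

// Sketch: decode the generated multi-document kubeadm.yaml and print the
// ClusterConfiguration networking fields shown in the log above.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumption: any YAML library works; v3 shown here
)

// clusterConfig mirrors only the fields quoted in the log; it is not the
// upstream kubeadm ClusterConfiguration type.
type clusterConfig struct {
	Kind       string `yaml:"kind"`
	Networking struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
		DNSDomain     string `yaml:"dnsDomain"`
	} `yaml:"networking"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log line above
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var cc clusterConfig
		if err := dec.Decode(&cc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if cc.Kind == "ClusterConfiguration" {
			fmt.Printf("podSubnet=%q serviceSubnet=%q dnsDomain=%q\n",
				cc.Networking.PodSubnet, cc.Networking.ServiceSubnet, cc.Networking.DNSDomain)
		}
	}
}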
	I0520 11:37:07.751969   54732 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:37:07.756727   54732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:07.771312   54732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:07.893449   54732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:37:07.911536   54732 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:37:07.911562   54732 certs.go:194] generating shared ca certs ...
	I0520 11:37:07.911582   54732 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:07.911754   54732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:37:07.911807   54732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:37:07.911819   54732 certs.go:256] generating profile certs ...
	I0520 11:37:07.911894   54732 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:37:07.911913   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt with IP's: []
	I0520 11:37:08.010690   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt ...
	I0520 11:37:08.010774   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: {Name:mk37e4d8e6b70b9df4bd1f33a3bb1ce774741c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.010992   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key ...
	I0520 11:37:08.011048   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key: {Name:mkef658583b929f627b0a4742ad863dab8569cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.011194   54732 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:37:08.011251   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.100]
	I0520 11:37:08.123361   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 ...
	I0520 11:37:08.123391   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9: {Name:mk126bf7eda8dbc6aeee644e3e54901634a644ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.123558   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9 ...
	I0520 11:37:08.123577   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9: {Name:mkf6c45a2b2e90a3cb664c8a2d9327ba46d6c43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.123773   54732 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt
	I0520 11:37:08.123864   54732 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key
	I0520 11:37:08.123915   54732 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:37:08.123933   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt with IP's: []
	I0520 11:37:08.287603   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt ...
	I0520 11:37:08.287636   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt: {Name:mk5a61fe0b766f2d87c141b576ed804ead7b2cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.287789   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key ...
	I0520 11:37:08.287803   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key: {Name:mk6ecec165027e23fe429021af4b2049861c5527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.287990   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:37:08.288041   54732 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:37:08.288052   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:37:08.288094   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:37:08.288129   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:37:08.288162   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:37:08.288214   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:08.288774   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:37:08.323616   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:37:08.353034   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:37:08.383245   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:37:08.412983   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:37:08.443285   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:37:08.474262   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:37:08.507794   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:37:08.542166   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:37:08.572287   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:37:08.603134   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:37:08.633088   54732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:37:08.650484   54732 ssh_runner.go:195] Run: openssl version
	I0520 11:37:08.657729   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:37:08.669476   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.674540   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.674594   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.682466   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:37:08.694751   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:37:08.709664   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.715955   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.716013   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.724137   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:37:08.736195   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:37:08.747804   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.754314   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.754379   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.762305   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:37:08.777044   54732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:37:08.782996   54732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
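	The "likely first start" conclusion above is reached simply because stat on the apiserver-kubelet-client certificate fails. A hypothetical helper that mirrors that check (illustrative only, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// isFirstStart reports whether the kubelet client certificate is absent,
// which the log above treats as a sign the cluster was never initialized.
func isFirstStart(certPath string) (bool, error) {
	_, err := os.Stat(certPath)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // no cert yet: likely first start
	}
	return false, err // cert exists (err == nil) or a real stat failure
}

func main() {
	first, err := isFirstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	fmt.Println("likely first start:", first)
}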
	I0520 11:37:08.783051   54732 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:37:08.783153   54732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:37:08.783206   54732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:37:08.832683   54732 cri.go:89] found id: ""
	I0520 11:37:08.832748   54732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:37:08.843324   54732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:37:08.853560   54732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:37:08.863070   54732 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:37:08.863089   54732 kubeadm.go:156] found existing configuration files:
	
	I0520 11:37:08.863140   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:37:08.876732   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:37:08.876795   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:37:08.888527   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:37:08.904546   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:37:08.904618   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:37:08.920506   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:37:08.932922   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:37:08.932998   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:37:08.954646   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:37:08.966058   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:37:08.966132   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
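	The four grep/rm pairs above check each existing kubeconfig file for the expected control-plane endpoint and remove the file when the endpoint (or the file itself) is missing, before kubeadm init is run below. A rough Go sketch of that cleanup loop, under the assumption that shelling out to grep is acceptable; the endpoint and file list are copied from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

var confs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func main() {
	for _, f := range confs {
		// grep exits non-zero when the endpoint is absent or the file does not exist.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s lacks %s, removing\n", f, endpoint)
			_ = os.Remove(f) // ignore "not exist" errors, like `rm -f`
		}
	}
}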
	I0520 11:37:08.978052   54732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:37:09.252760   54732 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:37:07.576280   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:07.591398   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:07.591463   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:07.645086   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:07.645121   49001 cri.go:89] found id: ""
	I0520 11:37:07.645131   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:07.645185   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.651098   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:07.651160   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:07.697226   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:07.697254   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:07.697259   49001 cri.go:89] found id: ""
	I0520 11:37:07.697280   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:07.697350   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.702815   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.708988   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:07.709066   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:07.753589   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:07.753612   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:07.753617   49001 cri.go:89] found id: ""
	I0520 11:37:07.753625   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:07.753673   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.758494   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.762958   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:07.763018   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:07.806489   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:07.806519   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:07.806526   49001 cri.go:89] found id: ""
	I0520 11:37:07.806536   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:07.806594   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.813114   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.818058   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:07.818128   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:07.862362   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:07.862389   49001 cri.go:89] found id: ""
	I0520 11:37:07.862398   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:07.862450   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.867347   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:07.867414   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:07.906940   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:07.906966   49001 cri.go:89] found id: ""
	I0520 11:37:07.906975   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:07.907033   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:07.912776   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:07.912845   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:07.961828   49001 cri.go:89] found id: ""
	I0520 11:37:07.961908   49001 logs.go:276] 0 containers: []
	W0520 11:37:07.961929   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:07.961955   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:07.962001   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:08.333365   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:08.333400   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:08.417686   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:08.417715   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:08.417733   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:08.488659   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:08.488690   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:08.536273   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:08.536294   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:08.536308   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:08.628212   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:08.628247   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:08.680938   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:08.680967   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:08.725538   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:08.725566   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:08.770191   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:08.770213   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:08.871237   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:08.871265   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:08.892521   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:08.892556   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:08.948426   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:08.948468   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:09.002843   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:09.002875   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:09.045892   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:09.045925   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:08.703262   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:08.703812   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:08.703838   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:08.703768   55243 retry.go:31] will retry after 2.0654703s: waiting for machine to come up
	I0520 11:37:10.771048   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:10.771624   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:10.771660   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:10.771564   55243 retry.go:31] will retry after 1.968841858s: waiting for machine to come up
	I0520 11:37:11.599472   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:11.624938   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:11.625025   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:11.684315   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:11.684339   49001 cri.go:89] found id: ""
	I0520 11:37:11.684348   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:11.684406   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.690414   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:11.690483   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:11.736757   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:11.736789   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:11.736794   49001 cri.go:89] found id: ""
	I0520 11:37:11.736813   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:11.736877   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.741539   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.746899   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:11.746952   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:11.788160   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:11.788183   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:11.788189   49001 cri.go:89] found id: ""
	I0520 11:37:11.788198   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:11.788246   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.792882   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.797112   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:11.797200   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:11.843558   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:11.843584   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:11.843587   49001 cri.go:89] found id: ""
	I0520 11:37:11.843594   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:11.843639   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.849537   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.855114   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:11.855179   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:11.898529   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:11.898556   49001 cri.go:89] found id: ""
	I0520 11:37:11.898566   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:11.898629   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.903372   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:11.903487   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:11.947460   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:11.947488   49001 cri.go:89] found id: ""
	I0520 11:37:11.947497   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:11.947556   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:11.952747   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:11.952819   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:12.001075   49001 cri.go:89] found id: ""
	I0520 11:37:12.001103   49001 logs.go:276] 0 containers: []
	W0520 11:37:12.001112   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:12.001131   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:12.001146   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:12.020264   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:12.020297   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:12.071740   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:12.071774   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:12.158604   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:12.158645   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:12.461694   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:12.461736   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:12.502521   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:12.502558   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:12.546362   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:12.546401   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:12.595362   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:12.595392   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:12.688172   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:12.688207   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:12.762044   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:12.762074   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:12.762089   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:12.844573   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:12.844605   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:12.888586   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:12.888616   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:12.927295   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:12Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:12Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:12.927322   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:12.927333   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:12.977584   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:12.977616   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:15.516297   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:15.531544   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:15.531619   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:15.578203   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:15.578229   49001 cri.go:89] found id: ""
	I0520 11:37:15.578239   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:15.578297   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.583441   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:15.583512   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:15.624751   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:15.624770   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:15.624775   49001 cri.go:89] found id: ""
	I0520 11:37:15.624781   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:15.624826   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.629294   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.633366   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:15.633449   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:15.671090   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:15.671111   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:15.671114   49001 cri.go:89] found id: ""
	I0520 11:37:15.671121   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:15.671164   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.675459   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.679636   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:15.679686   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:15.714170   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:15.714188   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:15.714194   49001 cri.go:89] found id: ""
	I0520 11:37:15.714203   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:15.714248   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.718288   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.722201   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:15.722241   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:15.759239   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:15.759260   49001 cri.go:89] found id: ""
	I0520 11:37:15.759271   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:15.759325   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.763284   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:15.763332   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:15.798264   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:15.798287   49001 cri.go:89] found id: ""
	I0520 11:37:15.798297   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:15.798351   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:15.802374   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:15.802435   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:15.838958   49001 cri.go:89] found id: ""
	I0520 11:37:15.838992   49001 logs.go:276] 0 containers: []
	W0520 11:37:15.839004   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:15.839022   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:15.839046   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:37:12.742167   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:12.743228   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:12.743255   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:12.742612   55243 retry.go:31] will retry after 3.334571715s: waiting for machine to come up
	I0520 11:37:16.079189   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:16.079845   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:16.079870   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:16.079798   55243 retry.go:31] will retry after 3.382891723s: waiting for machine to come up
	W0520 11:37:15.925071   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:15.925095   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:15.925110   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:16.011780   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:16.011824   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:16.046849   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:16.046873   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:16.046887   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:16.081989   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:16.082018   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:16.116973   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:16.117003   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:16.207084   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:16.207113   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:16.534590   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:16.534632   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:16.549811   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:16.549835   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:16.590252   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:16.590293   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:16.628060   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:16.628094   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:16.679118   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:16.679147   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:16.724724   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:16.724754   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:16.798204   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:16.798235   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:19.342081   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:19.355673   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:19.355730   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:19.393452   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:19.393478   49001 cri.go:89] found id: ""
	I0520 11:37:19.393485   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:19.393531   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.398052   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:19.398112   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:19.438919   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:19.438942   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:19.438948   49001 cri.go:89] found id: ""
	I0520 11:37:19.438957   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:19.439010   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.443824   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.448024   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:19.448075   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:19.492262   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:19.492285   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:19.492291   49001 cri.go:89] found id: ""
	I0520 11:37:19.492299   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:19.492343   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.496462   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.500519   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:19.500564   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:19.535861   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:19.535888   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:19.535894   49001 cri.go:89] found id: ""
	I0520 11:37:19.535901   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:19.535952   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.540188   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.544763   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:19.544822   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:19.581319   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:19.581344   49001 cri.go:89] found id: ""
	I0520 11:37:19.581355   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:19.581414   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.585589   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:19.585654   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:19.622255   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:19.622274   49001 cri.go:89] found id: ""
	I0520 11:37:19.622281   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:19.622326   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:19.626439   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:19.626492   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:19.662188   49001 cri.go:89] found id: ""
	I0520 11:37:19.662217   49001 logs.go:276] 0 containers: []
	W0520 11:37:19.662227   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:19.662247   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:19.662262   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:19.699894   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:19.699922   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:19.734918   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:19.734944   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:19.775359   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:19Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:19Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:19.775378   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:19.775390   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:19.810562   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:19.810587   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:20.113343   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:20.113373   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:20.128320   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:20.128343   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:20.201970   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:20.201987   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:20.201999   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:20.243148   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:20.243175   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:20.286250   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:20.286277   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:20.380349   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:20.380381   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:20.449323   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:20.449356   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:20.490813   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:20.490842   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:20.566142   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:20.566175   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:19.466211   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:19.466699   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:37:19.466741   55177 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:37:19.466659   55243 retry.go:31] will retry after 4.541270951s: waiting for machine to come up
	I0520 11:37:23.103480   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:23.116900   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:23.116994   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:23.151961   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:23.151981   49001 cri.go:89] found id: ""
	I0520 11:37:23.151999   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:23.152055   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.156424   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:23.156480   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:23.189983   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:23.190002   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:23.190007   49001 cri.go:89] found id: ""
	I0520 11:37:23.190013   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:23.190059   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.196223   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.200761   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:23.200820   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:23.237596   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:23.237618   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:23.237622   49001 cri.go:89] found id: ""
	I0520 11:37:23.237628   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:23.237682   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.242013   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.246818   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:23.246876   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:23.289768   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:23.289793   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:23.289797   49001 cri.go:89] found id: ""
	I0520 11:37:23.289803   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:23.289856   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.294246   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.298360   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:23.298412   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:23.335357   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:23.335374   49001 cri.go:89] found id: ""
	I0520 11:37:23.335381   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:23.335424   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.339702   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:23.339765   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:23.375990   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:23.376016   49001 cri.go:89] found id: ""
	I0520 11:37:23.376025   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:23.376076   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:23.380670   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:23.380805   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:23.421857   49001 cri.go:89] found id: ""
	I0520 11:37:23.421879   49001 logs.go:276] 0 containers: []
	W0520 11:37:23.421886   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:23.421900   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:23.421912   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:23.455095   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:23Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:23Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:23.455119   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:23.455130   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:23.502579   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:23.502603   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:23.538776   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:23.538800   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:23.574494   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:23.574516   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:23.883891   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:23.883931   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:23.950164   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:23.950212   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:23.950231   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:23.991907   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:23.991939   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:24.030829   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:24.030861   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:24.112185   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:24.112221   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:24.151754   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:24.151783   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:24.254484   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:24.254517   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:24.270356   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:24.270383   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:24.341080   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:24.341112   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:24.010072   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.010656   55177 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:37:24.010677   55177 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:37:24.010691   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.011059   55177 main.go:141] libmachine: (no-preload-814599) DBG | unable to find host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599
	I0520 11:37:24.085917   55177 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:37:24.085942   55177 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:37:24.085956   55177 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:37:24.088429   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.088791   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:minikube Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.088818   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.088964   55177 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:37:24.088995   55177 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:37:24.089044   55177 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:37:24.089068   55177 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:37:24.089085   55177 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:37:24.221040   55177 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:37:24.221350   55177 main.go:141] libmachine: (no-preload-814599) KVM machine creation complete!
	I0520 11:37:24.221627   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:37:24.222147   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:24.222324   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:24.222459   55177 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:37:24.222471   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:37:24.223825   55177 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:37:24.223840   55177 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:37:24.223845   55177 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:37:24.223870   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.226250   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.226582   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.226601   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.226724   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.226919   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.227095   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.227254   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.227432   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.227633   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.227644   55177 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:37:24.340138   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:37:24.340175   55177 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:37:24.340186   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.343303   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.343712   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.343746   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.343946   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.344140   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.344305   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.344456   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.344649   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.344851   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.344864   55177 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:37:24.457900   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:37:24.457989   55177 main.go:141] libmachine: found compatible host: buildroot
	I0520 11:37:24.458003   55177 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:37:24.458014   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:24.458280   55177 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:37:24.458305   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:24.458497   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.461144   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.461457   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.461481   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.461630   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.461774   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.461900   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.462049   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.462189   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.462354   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.462366   55177 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:37:24.587986   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:37:24.588014   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.590785   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.591209   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.591246   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.591429   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.591606   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.591761   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.591885   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.592097   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:24.592338   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:24.592364   55177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:37:24.714008   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:37:24.714043   55177 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:37:24.714099   55177 buildroot.go:174] setting up certificates
	I0520 11:37:24.714114   55177 provision.go:84] configureAuth start
	I0520 11:37:24.714130   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:37:24.714411   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:24.716917   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.717360   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.717390   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.717511   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.719673   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.719999   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.720020   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.720202   55177 provision.go:143] copyHostCerts
	I0520 11:37:24.720260   55177 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:37:24.720274   55177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:37:24.720335   55177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:37:24.720442   55177 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:37:24.720452   55177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:37:24.720483   55177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:37:24.720553   55177 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:37:24.720563   55177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:37:24.720593   55177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:37:24.720661   55177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:37:24.953337   55177 provision.go:177] copyRemoteCerts
	I0520 11:37:24.953397   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:37:24.953421   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:24.956029   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.956354   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:24.956385   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:24.956498   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:24.956687   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:24.956852   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:24.956973   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:37:25.043540   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:37:25.068520   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:37:25.092950   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:37:25.118422   55177 provision.go:87] duration metric: took 404.291795ms to configureAuth
	I0520 11:37:25.118446   55177 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:37:25.118636   55177 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:37:25.118715   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.121131   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.121531   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.121561   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.121698   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.121891   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.122044   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.122181   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.122308   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:25.122513   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:25.122530   55177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:37:25.396003   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:37:25.396034   55177 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:37:25.396046   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetURL
	I0520 11:37:25.397414   55177 main.go:141] libmachine: (no-preload-814599) DBG | Using libvirt version 6000000
	I0520 11:37:25.399913   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.400326   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.400353   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.400501   55177 main.go:141] libmachine: Docker is up and running!
	I0520 11:37:25.400519   55177 main.go:141] libmachine: Reticulating splines...
	I0520 11:37:25.400526   55177 client.go:171] duration metric: took 25.079325967s to LocalClient.Create
	I0520 11:37:25.400550   55177 start.go:167] duration metric: took 25.079391411s to libmachine.API.Create "no-preload-814599"
	I0520 11:37:25.400560   55177 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:37:25.400571   55177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:37:25.400586   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.400801   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:37:25.400824   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.403000   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.403301   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.403335   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.403481   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.403645   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.403811   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.403941   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:37:25.491378   55177 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:37:25.495807   55177 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:37:25.495824   55177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:37:25.495885   55177 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:37:25.495952   55177 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:37:25.496037   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:37:25.505374   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:25.529496   55177 start.go:296] duration metric: took 128.923149ms for postStartSetup
	I0520 11:37:25.529546   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:37:25.530242   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:25.532725   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.533176   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.533200   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.533439   55177 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:37:25.533628   55177 start.go:128] duration metric: took 25.239343412s to createHost
	I0520 11:37:25.533656   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.535937   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.536315   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.536340   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.536502   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.536692   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.536855   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.536987   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.537165   55177 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:25.537315   55177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:37:25.537329   55177 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:37:25.651171   55177 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205045.624370486
	
	I0520 11:37:25.651195   55177 fix.go:216] guest clock: 1716205045.624370486
	I0520 11:37:25.651205   55177 fix.go:229] Guest: 2024-05-20 11:37:25.624370486 +0000 UTC Remote: 2024-05-20 11:37:25.533640621 +0000 UTC m=+27.826588786 (delta=90.729865ms)
	I0520 11:37:25.651230   55177 fix.go:200] guest clock delta is within tolerance: 90.729865ms
	I0520 11:37:25.651237   55177 start.go:83] releasing machines lock for "no-preload-814599", held for 25.357113439s
	I0520 11:37:25.651262   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.651516   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:25.654171   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.654501   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.654531   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.654678   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.655149   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.655308   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:37:25.655385   55177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:37:25.655435   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.655512   55177 ssh_runner.go:195] Run: cat /version.json
	I0520 11:37:25.655535   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:37:25.657897   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658222   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.658245   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658262   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658394   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.658571   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.658728   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.658736   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:25.658764   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:25.658920   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:37:25.658921   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:37:25.659067   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:37:25.659212   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:37:25.659389   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:37:25.763092   55177 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:37:25.763157   55177 ssh_runner.go:195] Run: systemctl --version
	I0520 11:37:25.769613   55177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:37:25.930658   55177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:37:25.936855   55177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:37:25.936942   55177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:37:25.953422   55177 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:37:25.953445   55177 start.go:494] detecting cgroup driver to use...
	I0520 11:37:25.953563   55177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:37:25.971466   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:37:25.984671   55177 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:37:25.984742   55177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:37:25.998074   55177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:37:26.011500   55177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:37:26.126412   55177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:37:26.280949   55177 docker.go:233] disabling docker service ...
	I0520 11:37:26.281031   55177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:37:26.295029   55177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:37:26.307980   55177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:37:26.422857   55177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:37:26.541328   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:37:26.556143   55177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:37:26.574739   55177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:37:26.574794   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.584821   55177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:37:26.584874   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.594783   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.604910   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.614901   55177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:37:26.625141   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.634960   55177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.651910   55177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:26.661857   55177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:37:26.671034   55177 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:37:26.671101   55177 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:37:26.683517   55177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:37:26.692709   55177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:26.800692   55177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:37:26.941265   55177 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:37:26.941346   55177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:37:26.946971   55177 start.go:562] Will wait 60s for crictl version
	I0520 11:37:26.947023   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.951327   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:37:26.992538   55177 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:37:26.992627   55177 ssh_runner.go:195] Run: crio --version
	I0520 11:37:27.025584   55177 ssh_runner.go:195] Run: crio --version
	I0520 11:37:27.057900   55177 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:37:27.059089   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:37:27.062968   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:27.063491   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:37:27.063522   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:37:27.063790   55177 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:37:27.068170   55177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:27.083225   55177 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:37:27.083395   55177 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:37:27.083456   55177 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:27.129466   55177 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:37:27.129494   55177 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:37:27.129557   55177 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:27.129843   55177 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.129863   55177 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.129849   55177 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:37:27.129934   55177 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.130002   55177 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.130039   55177 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.130008   55177 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.131103   55177 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:27.131552   55177 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:37:27.131786   55177 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.131923   55177 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.131944   55177 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.132036   55177 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.132092   55177 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.132634   55177 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.246628   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.251235   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.263946   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.264260   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:37:27.271060   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.282205   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.283651   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.367229   55177 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:37:27.367284   55177 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.367339   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.373568   55177 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:37:27.373620   55177 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.373670   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.440049   55177 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:37:27.440097   55177 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.440145   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.457705   55177 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0520 11:37:27.457750   55177 cri.go:218] Removing image: registry.k8s.io/pause:3.9
	I0520 11:37:27.457795   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.463903   55177 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:37:27.463943   55177 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.463978   55177 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:37:27.463989   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.464010   55177 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.464024   55177 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:37:27.464055   55177 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.464065   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:37:27.464097   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:37:27.464099   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:37:27.464112   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.464056   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.464143   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.9
	I0520 11:37:27.540796   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:37:27.556746   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:37:27.556886   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:37:27.564187   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:37:27.564281   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0520 11:37:27.564350   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:37:27.564364   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9
	I0520 11:37:27.564374   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:37:27.564429   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:37:27.564435   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:37:27.564434   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:37:27.613351   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0520 11:37:27.613394   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0520 11:37:27.613507   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:37:27.613591   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:37:27.672183   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:37:27.672205   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0520 11:37:27.672250   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
	I0520 11:37:27.672300   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.1': No such file or directory
	I0520 11:37:27.672305   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:37:27.672325   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 --> /var/lib/minikube/images/kube-apiserver_v1.30.1 (32776704 bytes)
	I0520 11:37:27.672355   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:37:27.672406   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.1': No such file or directory
	I0520 11:37:27.672418   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:37:27.672426   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.1': No such file or directory
	I0520 11:37:27.672433   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 --> /var/lib/minikube/images/kube-controller-manager_v1.30.1 (31146496 bytes)
	I0520 11:37:27.672447   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 --> /var/lib/minikube/images/kube-scheduler_v1.30.1 (19324928 bytes)
	I0520 11:37:27.718971   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.1': No such file or directory
	I0520 11:37:27.719011   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 --> /var/lib/minikube/images/kube-proxy_v1.30.1 (29022720 bytes)
	I0520 11:37:27.726500   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0520 11:37:27.726539   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (57244160 bytes)
	I0520 11:37:26.887907   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:26.902251   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:26.902325   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:26.945102   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:26.945124   49001 cri.go:89] found id: ""
	I0520 11:37:26.945133   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:26.945197   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.949903   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:26.949960   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:26.988144   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:26.988167   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:26.988173   49001 cri.go:89] found id: ""
	I0520 11:37:26.988180   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:26.988238   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.992745   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:26.998251   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:26.998324   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:27.040952   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:27.040980   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:27.040987   49001 cri.go:89] found id: ""
	I0520 11:37:27.040996   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:27.041057   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.045674   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.049831   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:27.049903   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:27.099343   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:27.099366   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:27.099372   49001 cri.go:89] found id: ""
	I0520 11:37:27.099380   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:27.099432   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.104037   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.109906   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:27.109987   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:27.154017   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:27.154035   49001 cri.go:89] found id: ""
	I0520 11:37:27.154044   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:27.154105   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.158162   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:27.158221   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:27.195164   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:27.195182   49001 cri.go:89] found id: ""
	I0520 11:37:27.195188   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:27.195233   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:27.199280   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:27.199327   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:27.236823   49001 cri.go:89] found id: ""
	I0520 11:37:27.236846   49001 logs.go:276] 0 containers: []
	W0520 11:37:27.236858   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:27.236870   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:27.236882   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:27.314806   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:27.314851   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:27.363216   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:27.363242   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:27.402395   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:27.402420   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:27.729526   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:27.729553   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:27.767055   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:27.767082   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:27.882099   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:27.882129   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:27.899639   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:27.899668   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:27.958550   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:27.958646   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:28.061707   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:28.061748   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:28.116585   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:28.116622   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:28.211728   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:28.211753   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:28.211803   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:28.266215   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:28Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:28Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:28.266243   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:28.266259   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:28.306859   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:28.306886   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:30.857904   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:30.873289   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:30.873357   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:27.779068   55177 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.9
	I0520 11:37:27.779137   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.9
	I0520 11:37:28.046088   55177 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:28.575621   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
	I0520 11:37:28.575669   55177 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:37:28.575675   55177 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:37:28.575707   55177 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:28.575720   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:37:28.575742   55177 ssh_runner.go:195] Run: which crictl
	I0520 11:37:30.551513   55177 ssh_runner.go:235] Completed: which crictl: (1.975744229s)
	I0520 11:37:30.551553   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.975816608s)
	I0520 11:37:30.551568   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:37:30.551591   55177 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:30.551604   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:37:30.551663   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:37:30.598355   55177 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:37:30.598457   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:37:30.908947   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:30.950363   49001 cri.go:89] found id: ""
	I0520 11:37:30.950387   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:30.950450   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:30.955960   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:30.956043   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:30.992343   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:30.992372   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:30.992378   49001 cri.go:89] found id: ""
	I0520 11:37:30.992389   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:30.992447   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:30.996956   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.000763   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:31.000823   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:31.042185   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:31.042209   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:31.042215   49001 cri.go:89] found id: ""
	I0520 11:37:31.042223   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:31.042274   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.046245   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.050282   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:31.050353   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:31.089038   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:31.089073   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:31.089079   49001 cri.go:89] found id: ""
	I0520 11:37:31.089088   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:31.089143   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.093258   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.097390   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:31.097456   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:31.140174   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:31.140201   49001 cri.go:89] found id: ""
	I0520 11:37:31.140210   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:31.140266   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.144917   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:31.145004   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:31.180759   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:31.180777   49001 cri.go:89] found id: ""
	I0520 11:37:31.180784   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:31.180833   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:31.184994   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:31.185048   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:31.223192   49001 cri.go:89] found id: ""
	I0520 11:37:31.223213   49001 logs.go:276] 0 containers: []
	W0520 11:37:31.223220   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:31.223233   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:31.223257   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:31.266593   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:31.266624   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:31.342228   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:31.342247   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:31.342259   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:31.387893   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:31.387927   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:31.427405   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:31.427434   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:31.746483   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:31.746525   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:31.782031   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:31Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:31Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:31.782050   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:31.782062   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:31.820977   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:31.821005   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:31.913236   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:31.913276   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:31.980608   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:31.980648   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:32.060069   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:32.060104   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:32.075967   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:32.076008   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:32.128455   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:32.128483   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:32.183253   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:32.183283   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:34.729497   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:34.746609   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:34.746725   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:34.785914   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:34.785937   49001 cri.go:89] found id: ""
	I0520 11:37:34.785946   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:34.786002   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.790445   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:34.790512   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:34.829697   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:34.829722   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:34.829727   49001 cri.go:89] found id: ""
	I0520 11:37:34.829736   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:34.829790   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.835328   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.839270   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:34.839319   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:34.875153   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:34.875178   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:34.875185   49001 cri.go:89] found id: ""
	I0520 11:37:34.875192   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:34.875247   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.879788   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.884086   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:34.884141   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:34.922741   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:34.922767   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:34.922772   49001 cri.go:89] found id: ""
	I0520 11:37:34.922781   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:34.922834   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.927043   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.931011   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:34.931062   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:34.965434   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:34.965454   49001 cri.go:89] found id: ""
	I0520 11:37:34.965462   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:34.965515   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:34.969567   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:34.969629   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:35.008101   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:35.008125   49001 cri.go:89] found id: ""
	I0520 11:37:35.008133   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:35.008193   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:35.012552   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:35.012612   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:35.050538   49001 cri.go:89] found id: ""
	I0520 11:37:35.050572   49001 logs.go:276] 0 containers: []
	W0520 11:37:35.050584   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:35.050603   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:35.050616   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:35.390893   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:35.390930   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:35.433798   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:35.433835   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:35.514603   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:35.514631   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:35.514646   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:35.577338   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:35.577374   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:35.658557   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:35.658593   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:35.699934   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:35.699968   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:35.745328   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:35.745355   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:35.762235   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:35.762260   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:35.857150   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:35.857195   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:33.445961   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (2.894267855s)
	I0520 11:37:33.445986   55177 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.847502572s)
	I0520 11:37:33.445998   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:37:33.446025   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:37:33.446059   55177 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0520 11:37:33.446080   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:37:33.446107   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0520 11:37:35.729539   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (2.283433909s)
	I0520 11:37:35.729568   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:37:35.729599   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:37:35.729650   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:37:37.710270   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.980593085s)
	I0520 11:37:37.710299   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:37:37.710323   55177 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:37:37.710373   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:37:35.905028   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:35.905069   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:35.941942   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:35Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:35Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:35.941975   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:35.941990   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:35.979328   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:35.979357   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:36.020549   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:36.020577   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:38.573432   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:38.587595   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:38.587679   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:38.630679   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:38.630705   49001 cri.go:89] found id: ""
	I0520 11:37:38.630713   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:38.630769   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.636007   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:38.636114   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:38.674472   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:38.674500   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:38.674507   49001 cri.go:89] found id: ""
	I0520 11:37:38.674516   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:38.674576   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.679092   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.683085   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:38.683139   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:38.731146   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:38.731170   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:38.731176   49001 cri.go:89] found id: ""
	I0520 11:37:38.731183   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:38.731242   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.735726   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.740093   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:38.740157   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:38.787213   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:38.787242   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:38.787248   49001 cri.go:89] found id: ""
	I0520 11:37:38.787257   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:38.787315   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.791653   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.795708   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:38.795779   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:38.834309   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:38.834338   49001 cri.go:89] found id: ""
	I0520 11:37:38.834347   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:38.834403   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.839590   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:38.839651   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:38.880056   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:38.880078   49001 cri.go:89] found id: ""
	I0520 11:37:38.880085   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:38.880137   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:38.884462   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:38.884517   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:38.925598   49001 cri.go:89] found id: ""
	I0520 11:37:38.925620   49001 logs.go:276] 0 containers: []
	W0520 11:37:38.925629   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:38.925646   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:38.925662   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:38.967708   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:38.967742   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:39.004493   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:39.004528   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:39.043895   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:39.043926   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:39.089707   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:39.089738   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:39.142616   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:39.142647   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:39.243147   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:39.243180   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:39.293839   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:39.293876   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:39.329260   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:39Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:39Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:39.329282   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:39.329293   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:39.371235   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:39.371263   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:39.702279   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:39.702317   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:39.719489   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:39.719529   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:39.790443   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:39.790460   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:39.790472   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:39.864449   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:39.864483   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:40.074777   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.364379197s)
	I0520 11:37:40.074800   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:37:40.074840   55177 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:37:40.074900   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:37:42.445256   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:42.459489   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:42.459560   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:42.505468   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:42.505491   49001 cri.go:89] found id: ""
	I0520 11:37:42.505500   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:42.505556   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.510073   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:42.510146   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:42.547578   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:42.547603   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:42.547608   49001 cri.go:89] found id: ""
	I0520 11:37:42.547616   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:42.547674   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.552444   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.557566   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:42.557635   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:42.603953   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:42.603977   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:42.603982   49001 cri.go:89] found id: ""
	I0520 11:37:42.603990   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:42.604045   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.609148   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.614267   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:42.614323   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:42.650755   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:42.650784   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:42.650790   49001 cri.go:89] found id: ""
	I0520 11:37:42.650804   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:42.650860   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.655292   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.659414   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:42.659490   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:42.701066   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:42.701089   49001 cri.go:89] found id: ""
	I0520 11:37:42.701098   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:42.701151   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.705572   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:42.705622   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:42.748408   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:42.748439   49001 cri.go:89] found id: ""
	I0520 11:37:42.748449   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:42.748504   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:42.753064   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:42.753138   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:42.793841   49001 cri.go:89] found id: ""
	I0520 11:37:42.793867   49001 logs.go:276] 0 containers: []
	W0520 11:37:42.793879   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:42.793896   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:42.793911   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:42.868634   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:42.868668   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:42.909759   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:42.909786   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:42.909800   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:42.957418   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:42.957450   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:43.032601   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:43.032630   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:43.032645   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:43.070087   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:43.070114   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:43.145890   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:43.145924   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:43.182905   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:43.182933   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:43.524243   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:43.524285   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:43.615647   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:43.615677   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:43.655032   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:43.655058   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:43.695013   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:43.695040   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:43.742525   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:43.742567   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:43.793492   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:43.793536   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:44.182697   55177 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.107772456s)
	I0520 11:37:44.182726   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:37:44.182751   55177 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:37:44.182803   55177 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:37:44.828689   55177 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:37:44.828729   55177 cache_images.go:123] Successfully loaded all cached images
	I0520 11:37:44.828736   55177 cache_images.go:92] duration metric: took 17.699229187s to LoadCachedImages
	I0520 11:37:44.828760   55177 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:37:44.828899   55177 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:37:44.828991   55177 ssh_runner.go:195] Run: crio config
	I0520 11:37:44.874986   55177 cni.go:84] Creating CNI manager for ""
	I0520 11:37:44.875006   55177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:44.875023   55177 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:37:44.875047   55177 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:37:44.875178   55177 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:37:44.875243   55177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:37:44.884921   55177 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0520 11:37:44.884993   55177 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0520 11:37:44.894194   55177 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0520 11:37:44.894246   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:37:44.894261   55177 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0520 11:37:44.894340   55177 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0520 11:37:44.894349   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0520 11:37:44.894399   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0520 11:37:44.911262   55177 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0520 11:37:44.911298   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0520 11:37:44.911403   55177 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0520 11:37:44.911405   55177 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0520 11:37:44.911439   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0520 11:37:44.927806   55177 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0520 11:37:44.927842   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0520 11:37:45.687562   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:37:45.698455   55177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:37:45.715540   55177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:37:45.732096   55177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:37:45.749226   55177 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:37:45.753251   55177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:45.765783   55177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:45.884722   55177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:37:45.901158   55177 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:37:45.901184   55177 certs.go:194] generating shared ca certs ...
	I0520 11:37:45.901206   55177 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:45.901399   55177 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:37:45.901458   55177 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:37:45.901474   55177 certs.go:256] generating profile certs ...
	I0520 11:37:45.901537   55177 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:37:45.901550   55177 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt with IP's: []
	I0520 11:37:45.963732   55177 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt ...
	I0520 11:37:45.963779   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: {Name:mk0cb8897bf8302b52dc7ca5046d539a70405121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:45.963960   55177 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key ...
	I0520 11:37:45.963971   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key: {Name:mkc542e91fa2ebd4a233ff7fd999cbdcec3b969d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:45.964047   55177 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:37:45.964063   55177 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.185]
	I0520 11:37:46.111803   55177 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5 ...
	I0520 11:37:46.111832   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5: {Name:mk03571f628d50acb43d58fb9e6f8c79750d1719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.111987   55177 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5 ...
	I0520 11:37:46.112000   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5: {Name:mk134aa0713e2931956f241e87d95af2ee2ed3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.112067   55177 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt.d1d23eb5 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt
	I0520 11:37:46.112166   55177 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key
	I0520 11:37:46.112228   55177 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:37:46.112245   55177 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt with IP's: []
	I0520 11:37:46.549891   55177 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt ...
	I0520 11:37:46.549916   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt: {Name:mk8ba6460ca87a91db3736a19ac0b4aa263186c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.550108   55177 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key ...
	I0520 11:37:46.550128   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key: {Name:mkb0abd6decbc99a654d27267c113d1ab1d3de2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:46.550328   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:37:46.550364   55177 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:37:46.550375   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:37:46.550397   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:37:46.550420   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:37:46.550442   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:37:46.550477   55177 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:46.551044   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:37:46.585010   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:37:46.628269   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:37:46.660505   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:37:46.691056   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:37:46.717801   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:37:46.745010   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:37:46.771581   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:37:46.797609   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:37:46.824080   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:37:46.850011   55177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:37:46.876333   55177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:37:46.897181   55177 ssh_runner.go:195] Run: openssl version
	I0520 11:37:46.905468   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:37:46.918991   55177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:37:46.925511   55177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:37:46.925582   55177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:37:46.934013   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:37:46.948980   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:37:46.964110   55177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:37:46.970166   55177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:37:46.970217   55177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:37:46.976331   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:37:46.991532   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:37:47.005423   55177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:47.010919   55177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:47.010974   55177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:47.018789   55177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:37:47.030467   55177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:37:47.034911   55177 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:37:47.034975   55177 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:37:47.035068   55177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:37:47.035119   55177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:37:47.080886   55177 cri.go:89] found id: ""
	I0520 11:37:47.080977   55177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:37:47.091749   55177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:37:47.102131   55177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:37:47.112035   55177 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:37:47.112054   55177 kubeadm.go:156] found existing configuration files:
	
	I0520 11:37:47.112094   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:37:47.121196   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:37:47.121234   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:37:47.130723   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:37:47.139592   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:37:47.139630   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:37:47.149295   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:37:47.158474   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:37:47.158518   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:37:47.168002   55177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:37:47.179568   55177 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:37:47.179614   55177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:37:47.191617   55177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:37:47.255028   55177 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:37:47.255305   55177 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:37:47.416469   55177 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:37:47.416613   55177 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:37:47.416736   55177 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:37:47.650205   55177 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:37:47.652566   55177 out.go:204]   - Generating certificates and keys ...
	I0520 11:37:47.652662   55177 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:37:47.652761   55177 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:37:47.718358   55177 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 11:37:46.309055   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:46.322358   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:46.322414   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:46.361895   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:46.361917   49001 cri.go:89] found id: ""
	I0520 11:37:46.361926   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:46.361982   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.366364   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:46.366430   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:46.407171   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:46.407195   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:46.407201   49001 cri.go:89] found id: ""
	I0520 11:37:46.407209   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:46.407268   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.411308   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.414901   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:46.414961   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:46.449327   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:46.449354   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:46.449360   49001 cri.go:89] found id: ""
	I0520 11:37:46.449367   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:46.449420   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.453472   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.457223   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:46.457279   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:46.491554   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:46.491576   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:46.491581   49001 cri.go:89] found id: ""
	I0520 11:37:46.491590   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:46.491642   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.495599   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.499467   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:46.499525   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:46.533561   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:46.533582   49001 cri.go:89] found id: ""
	I0520 11:37:46.533591   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:46.533642   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.537534   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:46.537584   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:46.576563   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:46.576585   49001 cri.go:89] found id: ""
	I0520 11:37:46.576592   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:46.576635   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:46.582100   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:46.582156   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:46.620799   49001 cri.go:89] found id: ""
	I0520 11:37:46.620841   49001 logs.go:276] 0 containers: []
	W0520 11:37:46.620853   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:46.620871   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:46.620888   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:46.703641   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:46.703668   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:46.703679   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:46.748549   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:46.748583   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:46.784693   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:46Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:46Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:46.784717   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:46.784733   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:46.864481   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:46.864510   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:46.904736   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:46.904770   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:47.252609   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:47.252642   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:47.380977   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:47.381012   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:47.399130   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:47.399164   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:47.439506   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:47.439539   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:47.486345   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:47.486374   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:47.527931   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:47.527958   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:47.569521   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:47.569555   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:47.635780   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:47.635814   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:50.179381   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:50.193636   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:50.193693   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:50.232908   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:50.232952   49001 cri.go:89] found id: ""
	I0520 11:37:50.232961   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:50.233017   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.237261   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:50.237315   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:50.284220   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:50.284243   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:50.284249   49001 cri.go:89] found id: ""
	I0520 11:37:50.284258   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:50.284318   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.288607   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.292550   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:50.292601   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:50.335296   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:50.335318   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:50.335324   49001 cri.go:89] found id: ""
	I0520 11:37:50.335331   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:50.335388   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.339529   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.343452   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:50.343512   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:50.382033   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:50.382055   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:50.382069   49001 cri.go:89] found id: ""
	I0520 11:37:50.382077   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:50.382132   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.386218   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.390003   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:50.390063   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:50.430907   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:50.430931   49001 cri.go:89] found id: ""
	I0520 11:37:50.430943   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:50.430997   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.435375   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:50.435447   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:50.479010   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:50.479037   49001 cri.go:89] found id: ""
	I0520 11:37:50.479052   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:50.479114   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:50.483423   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:50.483497   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:50.524866   49001 cri.go:89] found id: ""
	I0520 11:37:50.524890   49001 logs.go:276] 0 containers: []
	W0520 11:37:50.524900   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:50.524917   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:50.524943   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:50.539631   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:50.539660   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:50.576035   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:50.576072   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:50.608587   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:50.608612   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:50.645338   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:50.645357   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:50.645369   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:50.738685   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:50.738718   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:48.006405   55177 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 11:37:48.441893   55177 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 11:37:48.539041   55177 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 11:37:48.662501   55177 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 11:37:48.662876   55177 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-814599] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0520 11:37:48.806531   55177 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 11:37:48.806692   55177 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-814599] and IPs [192.168.39.185 127.0.0.1 ::1]
	I0520 11:37:48.864031   55177 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 11:37:49.122739   55177 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 11:37:49.274395   55177 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 11:37:49.274523   55177 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:37:49.766189   55177 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:37:49.981913   55177 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:37:50.714947   55177 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:37:50.974451   55177 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:37:51.258349   55177 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:37:51.259230   55177 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:37:51.262545   55177 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:37:51.286615   55177 out.go:204]   - Booting up control plane ...
	I0520 11:37:51.286763   55177 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:37:51.286893   55177 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:37:51.287001   55177 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:37:51.287179   55177 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:37:51.287292   55177 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:37:51.287355   55177 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:37:51.434555   55177 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:37:51.434684   55177 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:37:51.936163   55177 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.795085ms
	I0520 11:37:51.936240   55177 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:37:51.088429   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:51.088475   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:51.131476   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:51.131512   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:51.201646   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:51.201668   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:51.201680   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:51.238758   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:51.238790   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:51.337664   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:51.337706   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:51.405571   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:51.405601   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:51.450727   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:51.450756   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:51.501451   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:51.501478   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:54.046167   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:54.063445   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:54.063518   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:54.100590   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:54.100612   49001 cri.go:89] found id: ""
	I0520 11:37:54.100622   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:54.100681   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.105187   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:54.105255   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:54.141556   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:54.141579   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:54.141583   49001 cri.go:89] found id: ""
	I0520 11:37:54.141590   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:54.141645   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.145756   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.149760   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:54.149822   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:54.184636   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:54.184669   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:54.184675   49001 cri.go:89] found id: ""
	I0520 11:37:54.184684   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:54.184744   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.188820   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.192631   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:54.192703   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:54.228234   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:54.228260   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:54.228266   49001 cri.go:89] found id: ""
	I0520 11:37:54.228274   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:54.228328   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.232901   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.237012   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:54.237079   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:54.277191   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:54.277214   49001 cri.go:89] found id: ""
	I0520 11:37:54.277223   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:54.277276   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.281629   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:54.281711   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:54.325617   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:54.325642   49001 cri.go:89] found id: ""
	I0520 11:37:54.325652   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:54.325705   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:54.330858   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:54.330928   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:54.372269   49001 cri.go:89] found id: ""
	I0520 11:37:54.372301   49001 logs.go:276] 0 containers: []
	W0520 11:37:54.372311   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:54.372328   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:54.372345   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:54.452222   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:54.452255   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:54.492984   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:54.493012   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:54.827685   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:54.827732   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:54.934080   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:54.934116   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:55.000134   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:55.000162   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:55.000174   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:55.039134   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:55.039162   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:55.081373   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:55.081414   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:55.116827   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:55.116861   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:55.155511   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:55.155539   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:55.190401   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:55.190429   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:55.190444   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:55.205815   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:55.205855   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:55.281362   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:55.281394   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:55.321027   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:55.321062   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:57.437144   55177 kubeadm.go:309] [api-check] The API server is healthy after 5.501950158s
	I0520 11:37:57.449888   55177 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:37:57.467181   55177 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:37:57.489712   55177 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:37:57.489928   55177 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:37:57.502473   55177 kubeadm.go:309] [bootstrap-token] Using token: txomnq.xx9lvay4m5g9anm2
	I0520 11:37:57.504043   55177 out.go:204]   - Configuring RBAC rules ...
	I0520 11:37:57.504207   55177 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:37:57.512460   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:37:57.519408   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:37:57.522560   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:37:57.525953   55177 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:37:57.529180   55177 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:37:57.843667   55177 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:37:58.286964   55177 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:37:58.844540   55177 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:37:58.845755   55177 kubeadm.go:309] 
	I0520 11:37:58.845844   55177 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:37:58.845855   55177 kubeadm.go:309] 
	I0520 11:37:58.845990   55177 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:37:58.845999   55177 kubeadm.go:309] 
	I0520 11:37:58.846036   55177 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:37:58.846123   55177 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:37:58.846217   55177 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:37:58.846239   55177 kubeadm.go:309] 
	I0520 11:37:58.846326   55177 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:37:58.846336   55177 kubeadm.go:309] 
	I0520 11:37:58.846408   55177 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:37:58.846429   55177 kubeadm.go:309] 
	I0520 11:37:58.846518   55177 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:37:58.846619   55177 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:37:58.846711   55177 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:37:58.846720   55177 kubeadm.go:309] 
	I0520 11:37:58.846819   55177 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:37:58.846936   55177 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:37:58.846947   55177 kubeadm.go:309] 
	I0520 11:37:58.847042   55177 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token txomnq.xx9lvay4m5g9anm2 \
	I0520 11:37:58.847175   55177 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:37:58.847207   55177 kubeadm.go:309] 	--control-plane 
	I0520 11:37:58.847217   55177 kubeadm.go:309] 
	I0520 11:37:58.847324   55177 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:37:58.847335   55177 kubeadm.go:309] 
	I0520 11:37:58.847432   55177 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token txomnq.xx9lvay4m5g9anm2 \
	I0520 11:37:58.847545   55177 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:37:58.847994   55177 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:37:58.848101   55177 cni.go:84] Creating CNI manager for ""
	I0520 11:37:58.848116   55177 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:58.849751   55177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:37:57.875532   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:37:57.890122   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:37:57.890194   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:37:57.929792   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:57.929823   49001 cri.go:89] found id: ""
	I0520 11:37:57.929834   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:37:57.929912   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:57.934207   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:37:57.934271   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:37:57.976109   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:57.976134   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:57.976139   49001 cri.go:89] found id: ""
	I0520 11:37:57.976148   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:37:57.976206   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:57.980680   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:57.985469   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:37:57.985515   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:37:58.029316   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:58.029344   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:37:58.029349   49001 cri.go:89] found id: ""
	I0520 11:37:58.029357   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:37:58.029414   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.033846   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.038670   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:37:58.038728   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:37:58.081009   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:58.081031   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:58.081036   49001 cri.go:89] found id: ""
	I0520 11:37:58.081044   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:37:58.081107   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.085879   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.090280   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:37:58.090345   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:37:58.134174   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:58.134200   49001 cri.go:89] found id: ""
	I0520 11:37:58.134211   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:37:58.134270   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.138842   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:37:58.138920   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:37:58.181832   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:58.181853   49001 cri.go:89] found id: ""
	I0520 11:37:58.181860   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:37:58.181904   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:37:58.186287   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:37:58.186356   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:37:58.225130   49001 cri.go:89] found id: ""
	I0520 11:37:58.225160   49001 logs.go:276] 0 containers: []
	W0520 11:37:58.225171   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:37:58.225185   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:37:58.225202   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:37:58.239115   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:37:58.239151   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:37:58.293415   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:37:58.293447   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:37:58.345194   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:37:58.345219   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:37:58.461100   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:37:58.461134   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:37:58.530268   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:37:58.530292   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:37:58.530307   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:37:58.579308   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:37:58.579334   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:37:58.616046   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:37:58Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:37:58Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:37:58.616063   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:37:58.616076   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:37:58.936534   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:37:58.936565   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:37:58.982759   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:37:58.982785   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:37:59.027540   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:37:59.027568   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:37:59.094665   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:37:59.094692   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:37:59.171686   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:37:59.171716   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:37:59.211960   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:37:59.211991   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:37:58.850979   55177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:37:58.863166   55177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:37:58.887505   55177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:37:58.887640   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:37:58.887640   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_37_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:37:59.072977   55177 ops.go:34] apiserver oom_adj: -16
	I0520 11:37:59.073140   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:37:59.574135   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:00.074127   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:00.573722   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:01.073970   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:01.574131   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:02.073146   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:02.574121   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:01.755545   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:01.770871   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:01.770946   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:01.808952   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:01.808981   49001 cri.go:89] found id: ""
	I0520 11:38:01.808990   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:01.809053   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.813167   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:01.813238   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:01.847015   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:01.847038   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:01.847043   49001 cri.go:89] found id: ""
	I0520 11:38:01.847050   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:01.847103   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.851340   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.855148   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:01.855192   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:01.888084   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:01.888103   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:01.888106   49001 cri.go:89] found id: ""
	I0520 11:38:01.888112   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:01.888159   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.892130   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.895885   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:01.895948   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:01.931943   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:01.931965   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:01.931970   49001 cri.go:89] found id: ""
	I0520 11:38:01.931979   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:01.932032   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.936969   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.940913   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:01.940994   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:01.975734   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:01.975755   49001 cri.go:89] found id: ""
	I0520 11:38:01.975762   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:01.975818   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:01.980258   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:01.980308   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:02.020165   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:02.020180   49001 cri.go:89] found id: ""
	I0520 11:38:02.020187   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:02.020231   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:02.024833   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:02.024898   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:02.073708   49001 cri.go:89] found id: ""
	I0520 11:38:02.073726   49001 logs.go:276] 0 containers: []
	W0520 11:38:02.073733   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:02.073748   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:02.073764   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:02.369408   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:02.369450   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:02.418311   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:02.418340   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:02.451908   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:02.451940   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:02.489458   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:02.489484   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:02.553002   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:02.553029   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:02.594301   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:02.594329   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:02.634342   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:02.634366   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:02.672411   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:02.672442   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:02.712054   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:02.712083   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:02.784091   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:02.784121   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:02.884619   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:02.884654   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:02.899557   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:02.899582   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:02.969559   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:02.969581   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:02.969595   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:03.005807   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:02Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:02Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:05.506332   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:05.520379   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:05.520453   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:05.554156   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:05.554181   49001 cri.go:89] found id: ""
	I0520 11:38:05.554190   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:05.554246   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.558685   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:05.558758   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:05.600568   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:05.600593   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:05.600598   49001 cri.go:89] found id: ""
	I0520 11:38:05.600607   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:05.600653   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.605799   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.611071   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:05.611136   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:05.650467   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:05.650492   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:05.650497   49001 cri.go:89] found id: ""
	I0520 11:38:05.650504   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:05.650551   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.655176   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.659748   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:05.659821   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:05.695623   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:05.695643   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:05.695647   49001 cri.go:89] found id: ""
	I0520 11:38:05.695654   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:05.695701   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.700108   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.704252   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:05.704322   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:05.739656   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:05.739680   49001 cri.go:89] found id: ""
	I0520 11:38:05.739688   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:05.739744   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.743870   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:05.743933   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:05.785143   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:05.785169   49001 cri.go:89] found id: ""
	I0520 11:38:05.785177   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:05.785236   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:05.789604   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:05.789663   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:05.828131   49001 cri.go:89] found id: ""
	I0520 11:38:05.828158   49001 logs.go:276] 0 containers: []
	W0520 11:38:05.828168   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:05.828186   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:05.828202   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:05.842034   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:05.842058   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:03.074134   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:03.574137   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:04.073720   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:04.574192   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:05.073829   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:05.573879   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:06.073515   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:06.574110   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:07.073907   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:07.573773   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:05.875216   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:05.877160   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:05.971422   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:05.971460   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:06.015904   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:06.015934   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:06.052145   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:06.052174   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:06.094286   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:06Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:06Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:06.094315   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:06.094331   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:06.178181   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:06.178213   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:06.227395   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:06.227423   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:06.291628   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:06.291649   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:06.291661   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:06.334195   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:06.334223   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:06.374025   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:06.374053   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:06.448074   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:06.448115   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:06.485428   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:06.485457   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:09.288254   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:09.304075   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:09.304135   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:09.343973   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:09.343997   49001 cri.go:89] found id: ""
	I0520 11:38:09.344004   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:09.344056   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.348468   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:09.348534   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:09.387379   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:09.387406   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:09.387411   49001 cri.go:89] found id: ""
	I0520 11:38:09.387421   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:09.387480   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.392029   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.396158   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:09.396206   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:09.432485   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:09.432503   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:09.432506   49001 cri.go:89] found id: ""
	I0520 11:38:09.432512   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:09.432558   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.436751   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.441079   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:09.441127   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:09.493660   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:09.493683   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:09.493689   49001 cri.go:89] found id: ""
	I0520 11:38:09.493697   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:09.493757   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.498119   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.502318   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:09.502374   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:09.544814   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:09.544836   49001 cri.go:89] found id: ""
	I0520 11:38:09.544844   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:09.544897   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.549900   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:09.549969   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:09.593250   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:09.593274   49001 cri.go:89] found id: ""
	I0520 11:38:09.593282   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:09.593336   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:09.597853   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:09.597910   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:09.638338   49001 cri.go:89] found id: ""
	I0520 11:38:09.638363   49001 logs.go:276] 0 containers: []
	W0520 11:38:09.638372   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:09.638391   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:09.638408   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:09.708117   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:09.708140   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:09.708152   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:09.754468   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:09.754500   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:09.819363   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:09.819406   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:09.857032   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:09.857060   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:09.857075   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:09.894161   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:09.894189   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:09.936809   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:09.936893   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:10.045129   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:10.045158   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:10.060573   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:10.060599   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:10.107107   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:10.107139   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:10.158773   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:10.158800   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:10.207133   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:10.207161   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:10.249976   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:10.250002   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:10.333791   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:10.333823   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:08.073446   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:08.573368   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:09.073630   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:09.574030   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:10.073838   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:10.573821   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:11.073290   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:11.573985   55177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:11.662312   55177 kubeadm.go:1107] duration metric: took 12.774738407s to wait for elevateKubeSystemPrivileges
	W0520 11:38:11.662350   55177 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:38:11.662358   55177 kubeadm.go:393] duration metric: took 24.6273899s to StartCluster
	I0520 11:38:11.662374   55177 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:11.662435   55177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:38:11.663629   55177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:11.663853   55177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 11:38:11.663853   55177 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:38:11.665528   55177 out.go:177] * Verifying Kubernetes components...
	I0520 11:38:11.663958   55177 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:38:11.666849   55177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:38:11.665561   55177 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:38:11.666952   55177 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:38:11.664050   55177 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:38:11.665569   55177 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:38:11.667114   55177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:38:11.667002   55177 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:38:11.667546   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.667570   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.667573   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.667600   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.682729   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0520 11:38:11.682802   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0520 11:38:11.683238   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.683247   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.683729   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.683744   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.683886   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.683913   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.684162   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.684228   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.684376   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:11.684794   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.684831   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.687840   55177 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	I0520 11:38:11.687882   55177 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:38:11.688229   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.688258   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.700574   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
	I0520 11:38:11.700996   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.701527   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.701555   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.701879   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.702091   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:11.703051   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I0520 11:38:11.703377   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.703745   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:38:11.705546   55177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:38:11.704248   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.706748   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.706871   55177 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:38:11.706888   55177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:38:11.706907   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:38:11.707128   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.707692   55177 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:11.707716   55177 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:11.709974   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.710410   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:38:11.710437   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.710587   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:38:11.710782   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:38:11.710941   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:38:11.711099   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:38:11.722861   55177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0520 11:38:11.723238   55177 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:11.723627   55177 main.go:141] libmachine: Using API Version  1
	I0520 11:38:11.723641   55177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:11.723929   55177 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:11.724089   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:11.725813   55177 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:38:11.726117   55177 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:38:11.726135   55177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:38:11.726152   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:38:11.728786   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.729324   55177 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:38:11.729347   55177 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:11.729487   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:38:11.729647   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:38:11.729802   55177 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:38:11.729968   55177 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:38:11.934267   55177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:38:11.934815   55177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 11:38:11.970939   55177 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:38:11.986808   55177 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:38:11.986838   55177 node_ready.go:38] duration metric: took 15.864359ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:38:11.986849   55177 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:11.996124   55177 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.012434   55177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:38:12.023869   55177 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:12.023896   55177 pod_ready.go:81] duration metric: took 27.747206ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.023910   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.090125   55177 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:12.090154   55177 pod_ready.go:81] duration metric: took 66.236019ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.090167   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.129141   55177 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:12.129171   55177 pod_ready.go:81] duration metric: took 38.996502ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.129185   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:12.146582   55177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:38:12.702736   55177 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0520 11:38:13.186343   55177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039726061s)
	I0520 11:38:13.186348   55177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.173874154s)
	I0520 11:38:13.186441   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186406   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186483   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.186460   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.186851   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.186869   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.186857   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.186880   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.186890   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186892   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.186898   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.186906   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.186939   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.186955   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.187404   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.187433   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.187430   55177 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:38:13.187441   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.188481   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.188499   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.216531   55177 main.go:141] libmachine: Making call to close driver server
	I0520 11:38:13.216550   55177 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:38:13.216869   55177 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:38:13.216900   55177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:38:13.218600   55177 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0520 11:38:13.217156   55177 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-814599" context rescaled to 1 replicas
	I0520 11:38:13.220482   55177 addons.go:505] duration metric: took 1.556523033s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0520 11:38:13.648055   55177 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:13.648083   55177 pod_ready.go:81] duration metric: took 1.518890702s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:13.648096   55177 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:13.672171   55177 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:13.672198   55177 pod_ready.go:81] duration metric: took 24.088741ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:13.672209   55177 pod_ready.go:38] duration metric: took 1.685347105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:13.672227   55177 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:38:13.672288   55177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:13.707243   55177 api_server.go:72] duration metric: took 2.043355375s to wait for apiserver process to appear ...
	I0520 11:38:13.707273   55177 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:38:13.707306   55177 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:38:13.714628   55177 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:38:13.716833   55177 api_server.go:141] control plane version: v1.30.1
	I0520 11:38:13.716855   55177 api_server.go:131] duration metric: took 9.574901ms to wait for apiserver health ...
	I0520 11:38:13.716861   55177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:38:13.780609   55177 system_pods.go:59] 8 kube-system pods found
	I0520 11:38:13.780651   55177 system_pods.go:61] "coredns-7db6d8ff4d-2j2br" [2d6df494-bcbc-4dcd-9d36-c15cdd5bb905] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I0520 11:38:13.780662   55177 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:38:13.780670   55177 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running
	I0520 11:38:13.780678   55177 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running
	I0520 11:38:13.780683   55177 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running
	I0520 11:38:13.780688   55177 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:38:13.780698   55177 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running
	I0520 11:38:13.780705   55177 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:38:13.780716   55177 system_pods.go:74] duration metric: took 63.849014ms to wait for pod list to return data ...
	I0520 11:38:13.780733   55177 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:38:13.974558   55177 default_sa.go:45] found service account: "default"
	I0520 11:38:13.974583   55177 default_sa.go:55] duration metric: took 193.840535ms for default service account to be created ...
	I0520 11:38:13.974592   55177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:38:14.179931   55177 system_pods.go:86] 8 kube-system pods found
	I0520 11:38:14.179964   55177 system_pods.go:89] "coredns-7db6d8ff4d-2j2br" [2d6df494-bcbc-4dcd-9d36-c15cdd5bb905] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I0520 11:38:14.179975   55177 system_pods.go:89] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:38:14.179982   55177 system_pods.go:89] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running
	I0520 11:38:14.179991   55177 system_pods.go:89] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running
	I0520 11:38:14.179997   55177 system_pods.go:89] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running
	I0520 11:38:14.180004   55177 system_pods.go:89] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:38:14.180010   55177 system_pods.go:89] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running
	I0520 11:38:14.180022   55177 system_pods.go:89] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:38:14.180051   55177 retry.go:31] will retry after 205.867365ms: missing components: kube-dns
	I0520 11:38:14.394051   55177 system_pods.go:86] 7 kube-system pods found
	I0520 11:38:14.394095   55177 system_pods.go:89] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:38:14.394106   55177 system_pods.go:89] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running
	I0520 11:38:14.394114   55177 system_pods.go:89] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running
	I0520 11:38:14.394123   55177 system_pods.go:89] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running
	I0520 11:38:14.394129   55177 system_pods.go:89] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:38:14.394135   55177 system_pods.go:89] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running
	I0520 11:38:14.394140   55177 system_pods.go:89] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:38:14.394150   55177 system_pods.go:126] duration metric: took 419.551262ms to wait for k8s-apps to be running ...
	I0520 11:38:14.394171   55177 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:38:14.394227   55177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:38:14.410212   55177 system_svc.go:56] duration metric: took 16.035102ms WaitForService to wait for kubelet
	I0520 11:38:14.410243   55177 kubeadm.go:576] duration metric: took 2.746359236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:38:14.410267   55177 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:38:14.574827   55177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:38:14.574868   55177 node_conditions.go:123] node cpu capacity is 2
	I0520 11:38:14.574896   55177 node_conditions.go:105] duration metric: took 164.622789ms to run NodePressure ...
	I0520 11:38:14.574910   55177 start.go:240] waiting for startup goroutines ...
	I0520 11:38:14.574921   55177 start.go:245] waiting for cluster config update ...
	I0520 11:38:14.574936   55177 start.go:254] writing updated cluster config ...
	I0520 11:38:14.575308   55177 ssh_runner.go:195] Run: rm -f paused
	I0520 11:38:14.624181   55177 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:38:14.626334   55177 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
	I0520 11:38:13.140283   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:13.159225   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:13.159298   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:13.202120   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:13.202143   49001 cri.go:89] found id: ""
	I0520 11:38:13.202153   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:13.202208   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.207060   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:13.207131   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:13.247442   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:13.247469   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:13.247476   49001 cri.go:89] found id: ""
	I0520 11:38:13.247484   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:13.247544   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.253117   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.257685   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:13.257760   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:13.295709   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:13.295733   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:13.295738   49001 cri.go:89] found id: ""
	I0520 11:38:13.295746   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:13.295808   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.300226   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.305016   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:13.305089   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:13.351175   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:13.351199   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:13.351205   49001 cri.go:89] found id: ""
	I0520 11:38:13.351213   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:13.351265   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.355878   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.360181   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:13.360251   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:13.404715   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:13.404740   49001 cri.go:89] found id: ""
	I0520 11:38:13.404750   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:13.404807   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.409346   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:13.409406   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:13.445545   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:13.445570   49001 cri.go:89] found id: ""
	I0520 11:38:13.445578   49001 logs.go:276] 1 containers: [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:13.445639   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:13.450746   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:13.450816   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:13.491412   49001 cri.go:89] found id: ""
	I0520 11:38:13.491436   49001 logs.go:276] 0 containers: []
	W0520 11:38:13.491445   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:13.491458   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:13.491470   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:13.540230   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:13.540271   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:13.650145   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:13.650183   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:13.695626   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:13.695673   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:13.734914   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:13.734938   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:13.749048   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:13.749078   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:13.798905   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:13.798946   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:13.870877   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:13.870910   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:13.908078   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:13.908112   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:14.008196   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:14.008229   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:14.078476   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:14.078502   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:14.078522   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:14.116654   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:14Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:14Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:14.116677   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:14.116693   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:14.153652   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:14.153678   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:14.461830   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:14.461873   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:17.021128   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:17.035181   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:17.035265   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:17.072505   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:17.072538   49001 cri.go:89] found id: ""
	I0520 11:38:17.072547   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:17.072605   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.076869   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:17.076937   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:17.114526   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:17.114546   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:17.114549   49001 cri.go:89] found id: ""
	I0520 11:38:17.114556   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:17.114599   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.118935   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.123340   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:17.123400   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:17.162519   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:17.162542   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:17.162548   49001 cri.go:89] found id: ""
	I0520 11:38:17.162556   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:17.162609   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.167204   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.171366   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:17.171419   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:17.209885   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:17.209910   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:17.209916   49001 cri.go:89] found id: ""
	I0520 11:38:17.209924   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:17.209983   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.214346   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.218379   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:17.218454   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:17.256838   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:17.256861   49001 cri.go:89] found id: ""
	I0520 11:38:17.256869   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:17.256939   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.261405   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:17.261469   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:17.300854   49001 cri.go:89] found id: "8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:17.300873   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:17.300879   49001 cri.go:89] found id: ""
	I0520 11:38:17.300888   49001 logs.go:276] 2 containers: [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:17.300966   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.305449   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:17.310191   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:17.310235   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:17.347442   49001 cri.go:89] found id: ""
	I0520 11:38:17.347472   49001 logs.go:276] 0 containers: []
	W0520 11:38:17.347482   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:17.347499   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:17.347516   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:17.423945   49001 logs.go:123] Gathering logs for kube-controller-manager [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb] ...
	I0520 11:38:17.423978   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:17.462895   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:17.462917   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:17.501643   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:17.501668   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:17.571073   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:17.571105   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:17.619999   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:17.620025   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:17.667926   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:17Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:17Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:17.667958   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:17.667973   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:17.709635   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:17.709666   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:18.008685   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:18.008721   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:18.027721   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:18.027757   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:18.101182   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:18.101207   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:18.101222   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:18.140099   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:18.140136   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:18.194360   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:18.194388   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:18.294219   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:18.294254   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:18.335701   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:18.335725   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:20.886589   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:20.901878   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:38:20.901934   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:38:20.944886   49001 cri.go:89] found id: "4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:20.944915   49001 cri.go:89] found id: ""
	I0520 11:38:20.944939   49001 logs.go:276] 1 containers: [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a]
	I0520 11:38:20.945012   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:20.949447   49001 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:38:20.949516   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:38:20.983674   49001 cri.go:89] found id: "6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:20.983699   49001 cri.go:89] found id: "b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:20.983705   49001 cri.go:89] found id: ""
	I0520 11:38:20.983714   49001 logs.go:276] 2 containers: [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131]
	I0520 11:38:20.983770   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:20.988081   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:20.992237   49001 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:38:20.992297   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:38:21.032367   49001 cri.go:89] found id: "af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:21.032392   49001 cri.go:89] found id: "0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	I0520 11:38:21.032398   49001 cri.go:89] found id: ""
	I0520 11:38:21.032406   49001 logs.go:276] 2 containers: [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]
	I0520 11:38:21.032462   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.036703   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.040505   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:38:21.040560   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:38:21.077869   49001 cri.go:89] found id: "f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:21.077891   49001 cri.go:89] found id: "f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:21.077895   49001 cri.go:89] found id: ""
	I0520 11:38:21.077912   49001 logs.go:276] 2 containers: [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621]
	I0520 11:38:21.077971   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.082081   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.086005   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:38:21.086067   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:38:21.119906   49001 cri.go:89] found id: "e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:21.119931   49001 cri.go:89] found id: ""
	I0520 11:38:21.119941   49001 logs.go:276] 1 containers: [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca]
	I0520 11:38:21.120008   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.124462   49001 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:38:21.124525   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:38:21.168036   49001 cri.go:89] found id: "8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:21.168057   49001 cri.go:89] found id: "49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:21.168061   49001 cri.go:89] found id: ""
	I0520 11:38:21.168067   49001 logs.go:276] 2 containers: [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590]
	I0520 11:38:21.168130   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.173361   49001 ssh_runner.go:195] Run: which crictl
	I0520 11:38:21.178007   49001 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:38:21.178086   49001 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:38:21.218589   49001 cri.go:89] found id: ""
	I0520 11:38:21.218613   49001 logs.go:276] 0 containers: []
	W0520 11:38:21.218621   49001 logs.go:278] No container was found matching "kindnet"
	I0520 11:38:21.218636   49001 logs.go:123] Gathering logs for container status ...
	I0520 11:38:21.218646   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:38:21.259467   49001 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:38:21.259496   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:38:21.332306   49001 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:38:21.332323   49001 logs.go:123] Gathering logs for kube-scheduler [f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc] ...
	I0520 11:38:21.332334   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ceedeab01b12e52d5f5c324eac9ed212ca20ccf9d8f731db29f977b407c9cc"
	I0520 11:38:21.421544   49001 logs.go:123] Gathering logs for coredns [af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42] ...
	I0520 11:38:21.421592   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af15f4a32f235bba4d7ac9e2147008cf08c431115e1fc77ee947ec560e18de42"
	I0520 11:38:21.471622   49001 logs.go:123] Gathering logs for coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e] ...
	I0520 11:38:21.471649   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e"
	W0520 11:38:21.513493   49001 logs.go:130] failed coredns [0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f262b60eeed78e81948f5b39d10f499df421c068db96686f8fe3c819511107e": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-20T11:38:21Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	 output: 
	** stderr ** 
	time="2024-05-20T11:38:21Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038/coredns/0.log\": lstat /var/log/pods/kube-system_coredns-7db6d8ff4d-7mv4n_6946e322-e0ec-4b7f-8b47-feec984c3038: no such file or directory"
	
	** /stderr **
	I0520 11:38:21.513516   49001 logs.go:123] Gathering logs for kube-controller-manager [8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb] ...
	I0520 11:38:21.513533   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e29ef4b3a8f867a135633755cb1285621197a0553eb5c271a7fbc353c14a9bb"
	I0520 11:38:21.554897   49001 logs.go:123] Gathering logs for kube-controller-manager [49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590] ...
	I0520 11:38:21.554925   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49f8eac679ef96f6a671d24ebd6019ae6c93efcfff7c72a41bbd424b1b242590"
	I0520 11:38:21.589547   49001 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:38:21.589575   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:38:21.883651   49001 logs.go:123] Gathering logs for kube-apiserver [4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a] ...
	I0520 11:38:21.883690   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c05e305d78b2e9886efee3f8cc0e9c9149753a4c28ff3ae3e5815d8f2d25d8a"
	I0520 11:38:21.953715   49001 logs.go:123] Gathering logs for etcd [b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131] ...
	I0520 11:38:21.953746   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b212b5b8becc9abe946c260ccccf27c98cb9eb4d2f38f8877f3c71d9af04c131"
	I0520 11:38:22.000910   49001 logs.go:123] Gathering logs for etcd [6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004] ...
	I0520 11:38:22.000946   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6215354b7fed0bf94a7a6895eb72e53e296d16ac8debaa6ba72fc9326f876004"
	I0520 11:38:22.045360   49001 logs.go:123] Gathering logs for kube-scheduler [f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621] ...
	I0520 11:38:22.045411   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2698879c744b475b480722952f26333eeecfcce605840ca48490494fc5f6621"
	I0520 11:38:22.085436   49001 logs.go:123] Gathering logs for kube-proxy [e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca] ...
	I0520 11:38:22.085467   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e406d396fe2f2022560936dccd2c560f389dabb615b8596aaf46c3f93bc83dca"
	I0520 11:38:22.124882   49001 logs.go:123] Gathering logs for kubelet ...
	I0520 11:38:22.124914   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:38:22.223409   49001 logs.go:123] Gathering logs for dmesg ...
	I0520 11:38:22.223442   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:38:24.738469   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:24.752248   49001 kubeadm.go:591] duration metric: took 4m2.032144588s to restartPrimaryControlPlane
	W0520 11:38:24.752331   49001 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:38:24.752364   49001 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:38:30.503490   49001 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.751107158s)
	I0520 11:38:30.503563   49001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:38:30.518745   49001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:38:30.530630   49001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:38:30.541997   49001 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:38:30.542012   49001 kubeadm.go:156] found existing configuration files:
	
	I0520 11:38:30.542047   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:38:30.552537   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:38:30.552578   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:38:30.563869   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:38:30.590860   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:38:30.590917   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:38:30.601141   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:38:30.610476   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:38:30.610526   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:38:30.619912   49001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:38:30.629267   49001 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:38:30.629324   49001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:38:30.638628   49001 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:38:30.822466   49001 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:38:38.725380   49001 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:38:38.725431   49001 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:38:38.725509   49001 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:38:38.725657   49001 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:38:38.725803   49001 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:38:38.725901   49001 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:38:38.727789   49001 out.go:204]   - Generating certificates and keys ...
	I0520 11:38:38.727871   49001 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:38:38.727933   49001 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:38:38.728050   49001 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:38:38.728146   49001 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:38:38.728260   49001 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:38:38.728363   49001 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:38:38.728443   49001 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:38:38.728523   49001 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:38:38.728618   49001 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:38:38.728728   49001 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:38:38.728771   49001 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:38:38.728826   49001 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:38:38.728872   49001 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:38:38.728965   49001 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:38:38.729016   49001 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:38:38.729070   49001 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:38:38.729170   49001 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:38:38.729295   49001 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:38:38.729384   49001 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:38:38.730955   49001 out.go:204]   - Booting up control plane ...
	I0520 11:38:38.731059   49001 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:38:38.731146   49001 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:38:38.731229   49001 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:38:38.731361   49001 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:38:38.731457   49001 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:38:38.731498   49001 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:38:38.731646   49001 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:38:38.731753   49001 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:38:38.731837   49001 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 503.840079ms
	I0520 11:38:38.731899   49001 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:38:38.731959   49001 kubeadm.go:309] [api-check] The API server is healthy after 5.002265792s
	I0520 11:38:38.732104   49001 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:38:38.732262   49001 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:38:38.732337   49001 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:38:38.732588   49001 kubeadm.go:309] [mark-control-plane] Marking the node pause-120159 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:38:38.732673   49001 kubeadm.go:309] [bootstrap-token] Using token: 6sxsyt.uubjd8d69g0boyj1
	I0520 11:38:38.734835   49001 out.go:204]   - Configuring RBAC rules ...
	I0520 11:38:38.734934   49001 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:38:38.735038   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:38:38.735193   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:38:38.735339   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:38:38.735476   49001 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:38:38.735580   49001 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:38:38.735715   49001 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:38:38.735784   49001 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:38:38.735867   49001 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:38:38.735880   49001 kubeadm.go:309] 
	I0520 11:38:38.735965   49001 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:38:38.735973   49001 kubeadm.go:309] 
	I0520 11:38:38.736073   49001 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:38:38.736086   49001 kubeadm.go:309] 
	I0520 11:38:38.736120   49001 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:38:38.736175   49001 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:38:38.736251   49001 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:38:38.736265   49001 kubeadm.go:309] 
	I0520 11:38:38.736340   49001 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:38:38.736350   49001 kubeadm.go:309] 
	I0520 11:38:38.736424   49001 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:38:38.736438   49001 kubeadm.go:309] 
	I0520 11:38:38.736532   49001 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:38:38.736628   49001 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:38:38.736720   49001 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:38:38.736727   49001 kubeadm.go:309] 
	I0520 11:38:38.736844   49001 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:38:38.736913   49001 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:38:38.736919   49001 kubeadm.go:309] 
	I0520 11:38:38.737027   49001 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6sxsyt.uubjd8d69g0boyj1 \
	I0520 11:38:38.737161   49001 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:38:38.737186   49001 kubeadm.go:309] 	--control-plane 
	I0520 11:38:38.737191   49001 kubeadm.go:309] 
	I0520 11:38:38.737278   49001 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:38:38.737286   49001 kubeadm.go:309] 
	I0520 11:38:38.737351   49001 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6sxsyt.uubjd8d69g0boyj1 \
	I0520 11:38:38.737451   49001 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:38:38.737461   49001 cni.go:84] Creating CNI manager for ""
	I0520 11:38:38.737467   49001 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:38:38.738864   49001 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:38:38.740207   49001 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:38:38.752858   49001 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:38:38.778813   49001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:38:38.778886   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:38.778915   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-120159 minikube.k8s.io/updated_at=2024_05_20T11_38_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=pause-120159 minikube.k8s.io/primary=true
	I0520 11:38:38.911412   49001 ops.go:34] apiserver oom_adj: -16
	I0520 11:38:38.911459   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:39.411546   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:39.911560   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:40.411480   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:40.912372   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:41.411567   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:41.912007   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:42.411473   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:42.911855   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:43.412327   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:43.911823   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:44.411514   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:44.911954   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:45.412331   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:45.911784   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:46.411858   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:46.912159   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:47.412449   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:47.911600   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:48.412431   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:48.911489   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:49.411661   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:49.911564   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:50.412078   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:50.911705   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:51.411618   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:51.912458   49001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:38:52.051101   49001 kubeadm.go:1107] duration metric: took 13.272266313s to wait for elevateKubeSystemPrivileges
	W0520 11:38:52.051148   49001 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:38:52.051159   49001 kubeadm.go:393] duration metric: took 4m29.435831598s to StartCluster
	I0520 11:38:52.051186   49001 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:52.051291   49001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:38:52.052252   49001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:38:52.052473   49001 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:38:52.053805   49001 out.go:177] * Verifying Kubernetes components...
	I0520 11:38:52.052570   49001 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:38:52.052699   49001 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:38:52.055113   49001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:38:52.056657   49001 out.go:177] * Enabled addons: 
	I0520 11:38:52.058676   49001 addons.go:505] duration metric: took 6.103036ms for enable addons: enabled=[]
	I0520 11:38:52.241611   49001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:38:52.303879   49001 node_ready.go:35] waiting up to 6m0s for node "pause-120159" to be "Ready" ...
	I0520 11:38:52.313545   49001 node_ready.go:49] node "pause-120159" has status "Ready":"True"
	I0520 11:38:52.313565   49001 node_ready.go:38] duration metric: took 9.653609ms for node "pause-120159" to be "Ready" ...
	I0520 11:38:52.313573   49001 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:52.321530   49001 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k4bcq" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.344982   49001 pod_ready.go:92] pod "coredns-7db6d8ff4d-k4bcq" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.345013   49001 pod_ready.go:81] duration metric: took 1.023459579s for pod "coredns-7db6d8ff4d-k4bcq" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.345022   49001 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n2sf5" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.361942   49001 pod_ready.go:92] pod "coredns-7db6d8ff4d-n2sf5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.361966   49001 pod_ready.go:81] duration metric: took 16.937707ms for pod "coredns-7db6d8ff4d-n2sf5" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.361978   49001 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.375098   49001 pod_ready.go:92] pod "etcd-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.375124   49001 pod_ready.go:81] duration metric: took 13.139918ms for pod "etcd-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.375136   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.380057   49001 pod_ready.go:92] pod "kube-apiserver-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.380074   49001 pod_ready.go:81] duration metric: took 4.930933ms for pod "kube-apiserver-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.380083   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.507519   49001 pod_ready.go:92] pod "kube-controller-manager-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.507541   49001 pod_ready.go:81] duration metric: took 127.451914ms for pod "kube-controller-manager-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.507550   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-28q5d" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.907081   49001 pod_ready.go:92] pod "kube-proxy-28q5d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:53.907104   49001 pod_ready.go:81] duration metric: took 399.548035ms for pod "kube-proxy-28q5d" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:53.907113   49001 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:54.307668   49001 pod_ready.go:92] pod "kube-scheduler-pause-120159" in "kube-system" namespace has status "Ready":"True"
	I0520 11:38:54.307689   49001 pod_ready.go:81] duration metric: took 400.570058ms for pod "kube-scheduler-pause-120159" in "kube-system" namespace to be "Ready" ...
	I0520 11:38:54.307696   49001 pod_ready.go:38] duration metric: took 1.994114326s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:38:54.307709   49001 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:38:54.307762   49001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:38:54.323119   49001 api_server.go:72] duration metric: took 2.270617012s to wait for apiserver process to appear ...
	I0520 11:38:54.323138   49001 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:38:54.323151   49001 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0520 11:38:54.328102   49001 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0520 11:38:54.329086   49001 api_server.go:141] control plane version: v1.30.1
	I0520 11:38:54.329108   49001 api_server.go:131] duration metric: took 5.96558ms to wait for apiserver health ...
	I0520 11:38:54.329115   49001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:38:54.509891   49001 system_pods.go:59] 7 kube-system pods found
	I0520 11:38:54.509920   49001 system_pods.go:61] "coredns-7db6d8ff4d-k4bcq" [60a35b25-1142-4747-9d9e-8b681f1b0a4f] Running
	I0520 11:38:54.509928   49001 system_pods.go:61] "coredns-7db6d8ff4d-n2sf5" [5aa9b938-1a9f-4278-80c6-ede8bffac8c5] Running
	I0520 11:38:54.509933   49001 system_pods.go:61] "etcd-pause-120159" [756cc6fb-de0b-4a0a-9af3-ea55481a963a] Running
	I0520 11:38:54.509938   49001 system_pods.go:61] "kube-apiserver-pause-120159" [f5e46550-70fa-4227-81c0-c48f3de2633e] Running
	I0520 11:38:54.509946   49001 system_pods.go:61] "kube-controller-manager-pause-120159" [3cf37af2-b9c6-4cbe-8a2e-6faab220311a] Running
	I0520 11:38:54.509951   49001 system_pods.go:61] "kube-proxy-28q5d" [eb2d6eed-e833-4f59-b070-efe984dd2d32] Running
	I0520 11:38:54.509957   49001 system_pods.go:61] "kube-scheduler-pause-120159" [c1329c69-f100-4b6d-bac9-207b6ed018e1] Running
	I0520 11:38:54.509963   49001 system_pods.go:74] duration metric: took 180.842613ms to wait for pod list to return data ...
	I0520 11:38:54.509973   49001 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:38:54.706824   49001 default_sa.go:45] found service account: "default"
	I0520 11:38:54.706848   49001 default_sa.go:55] duration metric: took 196.867998ms for default service account to be created ...
	I0520 11:38:54.706857   49001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:38:54.910080   49001 system_pods.go:86] 7 kube-system pods found
	I0520 11:38:54.910110   49001 system_pods.go:89] "coredns-7db6d8ff4d-k4bcq" [60a35b25-1142-4747-9d9e-8b681f1b0a4f] Running
	I0520 11:38:54.910118   49001 system_pods.go:89] "coredns-7db6d8ff4d-n2sf5" [5aa9b938-1a9f-4278-80c6-ede8bffac8c5] Running
	I0520 11:38:54.910126   49001 system_pods.go:89] "etcd-pause-120159" [756cc6fb-de0b-4a0a-9af3-ea55481a963a] Running
	I0520 11:38:54.910131   49001 system_pods.go:89] "kube-apiserver-pause-120159" [f5e46550-70fa-4227-81c0-c48f3de2633e] Running
	I0520 11:38:54.910136   49001 system_pods.go:89] "kube-controller-manager-pause-120159" [3cf37af2-b9c6-4cbe-8a2e-6faab220311a] Running
	I0520 11:38:54.910142   49001 system_pods.go:89] "kube-proxy-28q5d" [eb2d6eed-e833-4f59-b070-efe984dd2d32] Running
	I0520 11:38:54.910148   49001 system_pods.go:89] "kube-scheduler-pause-120159" [c1329c69-f100-4b6d-bac9-207b6ed018e1] Running
	I0520 11:38:54.910156   49001 system_pods.go:126] duration metric: took 203.292455ms to wait for k8s-apps to be running ...
	I0520 11:38:54.910169   49001 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:38:54.910218   49001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:38:54.925529   49001 system_svc.go:56] duration metric: took 15.353979ms WaitForService to wait for kubelet
	I0520 11:38:54.925556   49001 kubeadm.go:576] duration metric: took 2.873055164s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:38:54.925577   49001 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:38:55.108320   49001 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:38:55.108351   49001 node_conditions.go:123] node cpu capacity is 2
	I0520 11:38:55.108363   49001 node_conditions.go:105] duration metric: took 182.781309ms to run NodePressure ...
	I0520 11:38:55.108373   49001 start.go:240] waiting for startup goroutines ...
	I0520 11:38:55.108380   49001 start.go:245] waiting for cluster config update ...
	I0520 11:38:55.108386   49001 start.go:254] writing updated cluster config ...
	I0520 11:38:55.108713   49001 ssh_runner.go:195] Run: rm -f paused
	I0520 11:38:55.157080   49001 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:38:55.159084   49001 out.go:177] * Done! kubectl is now configured to use "pause-120159" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.141924214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205138141903610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=529eaa00-6a3c-48fa-9ad4-b64cf3c173ba name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.142308139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21041f19-5b00-45c6-9ad9-f07d2b905464 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.142373489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21041f19-5b00-45c6-9ad9-f07d2b905464 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.142662019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21041f19-5b00-45c6-9ad9-f07d2b905464 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.173338007Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=35c6316e-cf3c-491e-a6b4-b0e69e891ebe name=/runtime.v1.ImageService/ListImages
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.173856952Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c],Size_:117601759,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52 registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027],Size_:112170310,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Ima
ge{Id:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036 registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973],Size_:63026504,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,RepoTags:[registry.k8s.io/kube-proxy:v1.30.1],RepoDigests:[registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c],Size_:85933465,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de53
0d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,RepoTags:[docker.io/kindest/kindnetd:v20240202-8f1494ea],RepoDigests:[docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac],Size_:65291810,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=35c6316e-cf3c-491e-a6b4-b0e69e891ebe name=/runtime.v1.ImageService/ListImages
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.188667340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49da06c4-41e6-48df-8e14-5467c31ed0a4 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.188756582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49da06c4-41e6-48df-8e14-5467c31ed0a4 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.190956879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5723e3d2-1a67-408c-b59f-1bc64db7f2b0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.191319251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205138191298416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5723e3d2-1a67-408c-b59f-1bc64db7f2b0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.192130054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90e86609-3327-4d1c-8bf4-ac960b40abdc name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.192205598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90e86609-3327-4d1c-8bf4-ac960b40abdc name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.192426121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90e86609-3327-4d1c-8bf4-ac960b40abdc name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.215007985Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=cde5120b-8ad1-4f1d-9bf3-fc8182a95323 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.215189860Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-n2sf5,Uid:5aa9b938-1a9f-4278-80c6-ede8bffac8c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205132444421995,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:38:52.135661657Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k4bcq,Uid:60a35b25-1142-4747-9d9e-8b681f1b0a4f,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205132376019809,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:38:52.065792534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&PodSandboxMetadata{Name:kube-proxy-28q5d,Uid:eb2d6eed-e833-4f59-b070-efe984dd2d32,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205132230960302,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[st
ring]string{kubernetes.io/config.seen: 2024-05-20T11:38:51.921100727Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-120159,Uid:7220233a7b29136ea4685855ec2b5f9e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112642983147,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7220233a7b29136ea4685855ec2b5f9e,kubernetes.io/config.seen: 2024-05-20T11:38:32.196722505Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-120159,Uid:9a3625f506094e6e70bf9a09a
5ff7fab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112642049696,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a3625f506094e6e70bf9a09a5ff7fab,kubernetes.io/config.seen: 2024-05-20T11:38:32.196728665Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-120159,Uid:cf36b6031f0be4f966b49f3302053ddc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112639929593,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: cf36b6031f0be4f966b49f3302053ddc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.7:8443,kubernetes.io/config.hash: cf36b6031f0be4f966b49f3302053ddc,kubernetes.io/config.seen: 2024-05-20T11:38:32.196727310Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&PodSandboxMetadata{Name:etcd-pause-120159,Uid:1abdb2a9421c1c51820204e59e8987bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205112630841139,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.7:2379,kubernetes.io/config.hash: 1abdb2a9421c1c51820204e59e8987bc,kubernetes.io/config.
seen: 2024-05-20T11:38:32.196725945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cde5120b-8ad1-4f1d-9bf3-fc8182a95323 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.216116205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42f3d279-c9e5-47b3-8eb7-f2218ba7c02d name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.216191862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42f3d279-c9e5-47b3-8eb7-f2218ba7c02d name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.216358395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42f3d279-c9e5-47b3-8eb7-f2218ba7c02d name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.234155763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45c62ca4-b6a8-4890-b0c8-160e85c95b26 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.234226404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45c62ca4-b6a8-4890-b0c8-160e85c95b26 name=/runtime.v1.RuntimeService/Version
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.235219911Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=711696ec-9e64-41cb-876f-fca13387e3a7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.235688185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716205138235662370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=711696ec-9e64-41cb-876f-fca13387e3a7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.236138081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32556f78-7402-4c51-b8b1-b4ffe5371419 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.236215743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32556f78-7402-4c51-b8b1-b4ffe5371419 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:38:58 pause-120159 crio[3036]: time="2024-05-20 11:38:58.236414774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d,PodSandboxId:89a544ecbe8e4fdea02b26bfcc490d7f140bf261bfcc3f699362dcb5b6860699,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132834128897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n2sf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa9b938-1a9f-4278-80c6-ede8bffac8c5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f6931db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2,PodSandboxId:592a5391281f429bbe833da693c5f91b247269b73a890ca8380df9131ad2d357,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205132769253556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k4bcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 60a35b25-1142-4747-9d9e-8b681f1b0a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 14d424e7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824,PodSandboxId:5d41ec5fcf693950d9978fffac20811f459038bcb7b7f5c91fe25de9073f4d97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,Cr
eatedAt:1716205132355699439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28q5d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb2d6eed-e833-4f59-b070-efe984dd2d32,},Annotations:map[string]string{io.kubernetes.container.hash: be94c29a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d,PodSandboxId:aacce476a8d206f9a2dc18415c4ac149ca23b39b37fe1c8588d235f0b21ae174,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205112917773630,L
abels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf36b6031f0be4f966b49f3302053ddc,},Annotations:map[string]string{io.kubernetes.container.hash: 73bb4454,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d,PodSandboxId:c13f04966d6e69cecca9b7bd1524b297d367c8a5c8351e33784c25ceb3820b7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205112900691806,Labels:map
[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3625f506094e6e70bf9a09a5ff7fab,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08,PodSandboxId:890cc247570bf71be06e5555d0c05fc3aad5bc31c5eb51a3877e53411bcd9e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205112825864423,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1abdb2a9421c1c51820204e59e8987bc,},Annotations:map[string]string{io.kubernetes.container.hash: cfad84f7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb,PodSandboxId:148e0732394f4cb2b73f9709eb6156799b6d8650504c4d8b109198433e034b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205112802952114,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-120159,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7220233a7b29136ea4685855ec2b5f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32556f78-7402-4c51-b8b1-b4ffe5371419 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e335c47534890       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 seconds ago       Running             coredns                   0                   89a544ecbe8e4       coredns-7db6d8ff4d-n2sf5
	ec137482565e4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 seconds ago       Running             coredns                   0                   592a5391281f4       coredns-7db6d8ff4d-k4bcq
	421d33a1d4f98       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   5 seconds ago       Running             kube-proxy                0                   5d41ec5fcf693       kube-proxy-28q5d
	d9f0d694e4ee0       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   25 seconds ago      Running             kube-apiserver            1                   aacce476a8d20       kube-apiserver-pause-120159
	be213ff811e3c       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   25 seconds ago      Running             kube-controller-manager   7                   c13f04966d6e6       kube-controller-manager-pause-120159
	df5629e67d15f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago      Running             etcd                      3                   890cc247570bf       etcd-pause-120159
	0f15f10b6f508       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   25 seconds ago      Running             kube-scheduler            3                   148e0732394f4       kube-scheduler-pause-120159
	
	
	==> coredns [e335c47534890e23571d7440b24e963fd620030d54cb4db5400afecf1addf93d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ec137482565e4512e0f765fd430fb83485095a578f3a824d15fbeb64ce9073c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               pause-120159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-120159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=pause-120159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_38_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:38:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-120159
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:38:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:38:58 +0000   Mon, 20 May 2024 11:38:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:38:58 +0000   Mon, 20 May 2024 11:38:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:38:58 +0000   Mon, 20 May 2024 11:38:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:38:58 +0000   Mon, 20 May 2024 11:38:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    pause-120159
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7158a385ce02475b8afc41cda7c21255
	  System UUID:                7158a385-ce02-475b-8afc-41cda7c21255
	  Boot ID:                    30dd72df-d2bf-4cec-9531-c150555ff449
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-k4bcq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6s
	  kube-system                 coredns-7db6d8ff4d-n2sf5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6s
	  kube-system                 etcd-pause-120159                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         22s
	  kube-system                 kube-apiserver-pause-120159             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-controller-manager-pause-120159    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-proxy-28q5d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-scheduler-pause-120159             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 5s    kube-proxy       
	  Normal  Starting                 20s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s   kubelet          Node pause-120159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s   kubelet          Node pause-120159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s   kubelet          Node pause-120159 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s    node-controller  Node pause-120159 event: Registered Node pause-120159 in Controller
	
	
	==> dmesg <==
	[  +0.151039] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.309292] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.656743] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.076994] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.305407] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +1.122133] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.460122] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.093543] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.528876] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.009833] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.969892] kauditd_printk_skb: 90 callbacks suppressed
	[  +6.455929] systemd-fstab-generator[2544]: Ignoring "noauto" option for root device
	[  +0.235337] systemd-fstab-generator[2637]: Ignoring "noauto" option for root device
	[  +0.486265] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.193781] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.432032] systemd-fstab-generator[2899]: Ignoring "noauto" option for root device
	[May20 11:34] systemd-fstab-generator[3159]: Ignoring "noauto" option for root device
	[  +0.102705] kauditd_printk_skb: 171 callbacks suppressed
	[  +2.015709] systemd-fstab-generator[3283]: Ignoring "noauto" option for root device
	[ +12.433356] kauditd_printk_skb: 74 callbacks suppressed
	[May20 11:38] systemd-fstab-generator[10569]: Ignoring "noauto" option for root device
	[  +6.063610] systemd-fstab-generator[10889]: Ignoring "noauto" option for root device
	[  +0.089784] kauditd_printk_skb: 75 callbacks suppressed
	[ +14.254898] systemd-fstab-generator[11114]: Ignoring "noauto" option for root device
	[  +0.102164] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [df5629e67d15fc9fad15493ce58ef39d4eb148508d51681bfa344c9a2c444a08] <==
	{"level":"info","ts":"2024-05-20T11:38:33.178381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:38:33.178668Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"856b77cd5251110c","initial-advertise-peer-urls":["https://192.168.50.7:2380"],"listen-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.7:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:38:33.178717Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:38:33.178868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-05-20T11:38:33.178893Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-05-20T11:38:33.181803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c switched to configuration voters=(9613909553285501196)"}
	{"level":"info","ts":"2024-05-20T11:38:33.182093Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","added-peer-id":"856b77cd5251110c","added-peer-peer-urls":["https://192.168.50.7:2380"]}
	{"level":"info","ts":"2024-05-20T11:38:33.531676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T11:38:33.531786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T11:38:33.53184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 1"}
	{"level":"info","ts":"2024-05-20T11:38:33.531879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.531904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.531931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.531956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-05-20T11:38:33.535827Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:pause-120159 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:38:33.536039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:38:33.536641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:38:33.539032Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.544827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:38:33.5469Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.546975Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.547016Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:38:33.539982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:38:33.547029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:38:33.556333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	
	
	==> kernel <==
	 11:38:58 up 7 min,  0 users,  load average: 0.53, 0.53, 0.29
	Linux pause-120159 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9f0d694e4ee089a2f98e661ffc57342adb95ebabb63e3ccb504e8bd1a7c998d] <==
	I0520 11:38:35.545818       1 autoregister_controller.go:141] Starting autoregister controller
	I0520 11:38:35.545859       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0520 11:38:35.545885       1 cache.go:39] Caches are synced for autoregister controller
	I0520 11:38:35.546564       1 shared_informer.go:320] Caches are synced for configmaps
	I0520 11:38:35.547346       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0520 11:38:35.548185       1 controller.go:615] quota admission added evaluator for: namespaces
	I0520 11:38:35.548994       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0520 11:38:35.549040       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0520 11:38:35.549351       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0520 11:38:35.594407       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0520 11:38:36.410851       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0520 11:38:36.416228       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0520 11:38:36.416316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0520 11:38:36.954867       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0520 11:38:37.000766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0520 11:38:37.144523       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0520 11:38:37.150875       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.7]
	I0520 11:38:37.151784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0520 11:38:37.156420       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0520 11:38:37.465158       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0520 11:38:38.103139       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0520 11:38:38.142471       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0520 11:38:38.166522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0520 11:38:51.866124       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0520 11:38:51.988465       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [be213ff811e3c54c1dfe90935bbc09061eb48e357e42756b906c66f098f8cb0d] <==
	I0520 11:38:51.833988       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-120159"
	I0520 11:38:51.834033       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0520 11:38:51.837679       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0520 11:38:51.882775       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0520 11:38:51.918837       1 shared_informer.go:320] Caches are synced for endpoint
	I0520 11:38:51.965397       1 shared_informer.go:320] Caches are synced for deployment
	I0520 11:38:51.965517       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0520 11:38:51.965560       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0520 11:38:51.970124       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0520 11:38:52.017030       1 shared_informer.go:320] Caches are synced for stateful set
	I0520 11:38:52.031229       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 11:38:52.063074       1 shared_informer.go:320] Caches are synced for disruption
	I0520 11:38:52.076268       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 11:38:52.154878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="151.328629ms"
	I0520 11:38:52.165024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.05078ms"
	I0520 11:38:52.169461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.902µs"
	I0520 11:38:52.178752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.576µs"
	I0520 11:38:52.457200       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 11:38:52.514864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0520 11:38:52.514888       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0520 11:38:53.217135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.294µs"
	I0520 11:38:53.311395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.532663ms"
	I0520 11:38:53.312743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.878µs"
	I0520 11:38:53.331833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.699713ms"
	I0520 11:38:53.333159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.644µs"
	
	
	==> kube-proxy [421d33a1d4f982352ad0352681804eb3ac481c2455abe6657e1d92c0f7854824] <==
	I0520 11:38:52.596693       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:38:52.615110       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.7"]
	I0520 11:38:52.705758       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:38:52.705792       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:38:52.705807       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:38:52.708688       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:38:52.708880       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:38:52.708897       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:38:52.710430       1 config.go:192] "Starting service config controller"
	I0520 11:38:52.710448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:38:52.710470       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:38:52.710473       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:38:52.711114       1 config.go:319] "Starting node config controller"
	I0520 11:38:52.711122       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:38:52.812997       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:38:52.815730       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:38:52.816041       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0f15f10b6f508dfb823ceb93c39c1a03b4b411112c84a1efb2ed7dc107a935cb] <==
	W0520 11:38:35.558893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:38:35.559043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:38:35.559213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:38:35.559249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 11:38:35.559284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:38:35.559403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:38:35.559385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:38:35.559546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:38:36.482009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:38:36.482065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0520 11:38:36.482007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:38:36.482090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:38:36.484244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:38:36.484327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:38:36.493852       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:38:36.493902       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:38:36.494119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:38:36.494180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:38:36.629862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 11:38:36.629912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0520 11:38:36.674088       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:38:36.674212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 11:38:36.709357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0520 11:38:36.709470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0520 11:38:39.740526       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 11:38:38 pause-120159 kubelet[10896]: I0520 11:38:38.337104   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a3625f506094e6e70bf9a09a5ff7fab-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-120159\" (UID: \"9a3625f506094e6e70bf9a09a5ff7fab\") " pod="kube-system/kube-controller-manager-pause-120159"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.009331   10896 apiserver.go:52] "Watching apiserver"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.035659   10896 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 20 11:38:39 pause-120159 kubelet[10896]: E0520 11:38:39.166962   10896 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-120159\" already exists" pod="kube-system/kube-scheduler-pause-120159"
	May 20 11:38:39 pause-120159 kubelet[10896]: E0520 11:38:39.172652   10896 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-120159\" already exists" pod="kube-system/kube-apiserver-pause-120159"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.250032   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-120159" podStartSLOduration=1.2500004 podStartE2EDuration="1.2500004s" podCreationTimestamp="2024-05-20 11:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.228048013 +0000 UTC m=+1.316700045" watchObservedRunningTime="2024-05-20 11:38:39.2500004 +0000 UTC m=+1.338652428"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.261836   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-120159" podStartSLOduration=1.261818431 podStartE2EDuration="1.261818431s" podCreationTimestamp="2024-05-20 11:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.251094904 +0000 UTC m=+1.339746947" watchObservedRunningTime="2024-05-20 11:38:39.261818431 +0000 UTC m=+1.350470460"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.272852   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-120159" podStartSLOduration=3.272835086 podStartE2EDuration="3.272835086s" podCreationTimestamp="2024-05-20 11:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.262279739 +0000 UTC m=+1.350931771" watchObservedRunningTime="2024-05-20 11:38:39.272835086 +0000 UTC m=+1.361487110"
	May 20 11:38:39 pause-120159 kubelet[10896]: I0520 11:38:39.285050   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-120159" podStartSLOduration=1.285034108 podStartE2EDuration="1.285034108s" podCreationTimestamp="2024-05-20 11:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:39.273558779 +0000 UTC m=+1.362210812" watchObservedRunningTime="2024-05-20 11:38:39.285034108 +0000 UTC m=+1.373686140"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.921180   10896 topology_manager.go:215] "Topology Admit Handler" podUID="eb2d6eed-e833-4f59-b070-efe984dd2d32" podNamespace="kube-system" podName="kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936516   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2725\" (UniqueName: \"kubernetes.io/projected/eb2d6eed-e833-4f59-b070-efe984dd2d32-kube-api-access-k2725\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936577   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb2d6eed-e833-4f59-b070-efe984dd2d32-xtables-lock\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936641   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb2d6eed-e833-4f59-b070-efe984dd2d32-lib-modules\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:51 pause-120159 kubelet[10896]: I0520 11:38:51.936659   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb2d6eed-e833-4f59-b070-efe984dd2d32-kube-proxy\") pod \"kube-proxy-28q5d\" (UID: \"eb2d6eed-e833-4f59-b070-efe984dd2d32\") " pod="kube-system/kube-proxy-28q5d"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.066050   10896 topology_manager.go:215] "Topology Admit Handler" podUID="60a35b25-1142-4747-9d9e-8b681f1b0a4f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k4bcq"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.135744   10896 topology_manager.go:215] "Topology Admit Handler" podUID="5aa9b938-1a9f-4278-80c6-ede8bffac8c5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n2sf5"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.137756   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8fv8\" (UniqueName: \"kubernetes.io/projected/60a35b25-1142-4747-9d9e-8b681f1b0a4f-kube-api-access-m8fv8\") pod \"coredns-7db6d8ff4d-k4bcq\" (UID: \"60a35b25-1142-4747-9d9e-8b681f1b0a4f\") " pod="kube-system/coredns-7db6d8ff4d-k4bcq"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.137805   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a35b25-1142-4747-9d9e-8b681f1b0a4f-config-volume\") pod \"coredns-7db6d8ff4d-k4bcq\" (UID: \"60a35b25-1142-4747-9d9e-8b681f1b0a4f\") " pod="kube-system/coredns-7db6d8ff4d-k4bcq"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.238983   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa9b938-1a9f-4278-80c6-ede8bffac8c5-config-volume\") pod \"coredns-7db6d8ff4d-n2sf5\" (UID: \"5aa9b938-1a9f-4278-80c6-ede8bffac8c5\") " pod="kube-system/coredns-7db6d8ff4d-n2sf5"
	May 20 11:38:52 pause-120159 kubelet[10896]: I0520 11:38:52.239024   10896 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjkl\" (UniqueName: \"kubernetes.io/projected/5aa9b938-1a9f-4278-80c6-ede8bffac8c5-kube-api-access-4kjkl\") pod \"coredns-7db6d8ff4d-n2sf5\" (UID: \"5aa9b938-1a9f-4278-80c6-ede8bffac8c5\") " pod="kube-system/coredns-7db6d8ff4d-n2sf5"
	May 20 11:38:53 pause-120159 kubelet[10896]: I0520 11:38:53.253222   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n2sf5" podStartSLOduration=1.253196096 podStartE2EDuration="1.253196096s" podCreationTimestamp="2024-05-20 11:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:53.218187369 +0000 UTC m=+15.306839400" watchObservedRunningTime="2024-05-20 11:38:53.253196096 +0000 UTC m=+15.341848128"
	May 20 11:38:53 pause-120159 kubelet[10896]: I0520 11:38:53.285500   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-28q5d" podStartSLOduration=2.2854812620000002 podStartE2EDuration="2.285481262s" podCreationTimestamp="2024-05-20 11:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:53.254036888 +0000 UTC m=+15.342688919" watchObservedRunningTime="2024-05-20 11:38:53.285481262 +0000 UTC m=+15.374133291"
	May 20 11:38:53 pause-120159 kubelet[10896]: I0520 11:38:53.317701   10896 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k4bcq" podStartSLOduration=1.31751994 podStartE2EDuration="1.31751994s" podCreationTimestamp="2024-05-20 11:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-20 11:38:53.286358007 +0000 UTC m=+15.375010039" watchObservedRunningTime="2024-05-20 11:38:53.31751994 +0000 UTC m=+15.406171974"
	May 20 11:38:58 pause-120159 kubelet[10896]: I0520 11:38:58.469157   10896 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 20 11:38:58 pause-120159 kubelet[10896]: I0520 11:38:58.470315   10896 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-120159 -n pause-120159
helpers_test.go:261: (dbg) Run:  kubectl --context pause-120159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (378.62s)
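For manual triage, the post-mortem checks above can be repeated against the profile directly. This is a sketch assuming the pause-120159 profile still exists on the host and the same minikube binary is used; the commands mirror the harness invocations logged above, plus a log dump:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-120159 -n pause-120159
	kubectl --context pause-120159 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p pause-120159 logs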

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (269.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-210420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-210420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m29.164095126s)

                                                
                                                
-- stdout --
	* [old-k8s-version-210420] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-210420" primary control-plane node in "old-k8s-version-210420" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:36:34.924585   54732 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:36:34.924695   54732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:34.924703   54732 out.go:304] Setting ErrFile to fd 2...
	I0520 11:36:34.924708   54732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:36:34.924867   54732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:36:34.925474   54732 out.go:298] Setting JSON to false
	I0520 11:36:34.926335   54732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4738,"bootTime":1716200257,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:36:34.926390   54732 start.go:139] virtualization: kvm guest
	I0520 11:36:34.928768   54732 out.go:177] * [old-k8s-version-210420] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:36:34.930252   54732 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:36:34.930263   54732 notify.go:220] Checking for updates...
	I0520 11:36:34.931679   54732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:36:34.933173   54732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:36:34.934463   54732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:34.935908   54732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:36:34.937216   54732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:36:34.939062   54732 config.go:182] Loaded profile config "cert-expiration-355991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:34.939195   54732 config.go:182] Loaded profile config "kubernetes-upgrade-854547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:34.939359   54732 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:36:34.939456   54732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:36:34.977941   54732 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:36:34.979409   54732 start.go:297] selected driver: kvm2
	I0520 11:36:34.979437   54732 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:36:34.979450   54732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:36:34.980387   54732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:34.980479   54732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:36:34.995908   54732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:36:34.995989   54732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:36:34.996261   54732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:36:34.996298   54732 cni.go:84] Creating CNI manager for ""
	I0520 11:36:34.996308   54732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:36:34.996321   54732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:36:34.996392   54732 start.go:340] cluster config:
	{Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:36:34.996529   54732 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:36:34.998465   54732 out.go:177] * Starting "old-k8s-version-210420" primary control-plane node in "old-k8s-version-210420" cluster
	I0520 11:36:34.999693   54732 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:36:34.999727   54732 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 11:36:34.999733   54732 cache.go:56] Caching tarball of preloaded images
	I0520 11:36:34.999804   54732 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:36:34.999820   54732 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 11:36:34.999919   54732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:36:34.999937   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json: {Name:mkba1d4dc9330c5a6116d6f4725e66ffa908c3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:36:35.000083   54732 start.go:360] acquireMachinesLock for old-k8s-version-210420: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:36:35.000121   54732 start.go:364] duration metric: took 18.485µs to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:36:35.000143   54732 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:36:35.000199   54732 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 11:36:35.002725   54732 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 11:36:35.002882   54732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:36:35.002925   54732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:36:35.017943   54732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35777
	I0520 11:36:35.018361   54732 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:36:35.018933   54732 main.go:141] libmachine: Using API Version  1
	I0520 11:36:35.018957   54732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:36:35.019326   54732 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:36:35.019531   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:35.019673   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:35.019868   54732 start.go:159] libmachine.API.Create for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:36:35.019898   54732 client.go:168] LocalClient.Create starting
	I0520 11:36:35.019933   54732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 11:36:35.019975   54732 main.go:141] libmachine: Decoding PEM data...
	I0520 11:36:35.019998   54732 main.go:141] libmachine: Parsing certificate...
	I0520 11:36:35.020074   54732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 11:36:35.020101   54732 main.go:141] libmachine: Decoding PEM data...
	I0520 11:36:35.020120   54732 main.go:141] libmachine: Parsing certificate...
	I0520 11:36:35.020144   54732 main.go:141] libmachine: Running pre-create checks...
	I0520 11:36:35.020156   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .PreCreateCheck
	I0520 11:36:35.020505   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:36:35.020895   54732 main.go:141] libmachine: Creating machine...
	I0520 11:36:35.020908   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .Create
	I0520 11:36:35.021045   54732 main.go:141] libmachine: (old-k8s-version-210420) Creating KVM machine...
	I0520 11:36:35.022377   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found existing default KVM network
	I0520 11:36:35.023751   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.023592   54755 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f9:d0:04} reservation:<nil>}
	I0520 11:36:35.024764   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.024687   54755 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:e9:79} reservation:<nil>}
	I0520 11:36:35.026003   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.025924   54755 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:e6:f9} reservation:<nil>}
	I0520 11:36:35.028479   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.028403   54755 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0520 11:36:35.029634   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.029565   54755 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014d70}
	I0520 11:36:35.029659   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | created network xml: 
	I0520 11:36:35.029667   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | <network>
	I0520 11:36:35.029675   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   <name>mk-old-k8s-version-210420</name>
	I0520 11:36:35.029680   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   <dns enable='no'/>
	I0520 11:36:35.029685   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   
	I0520 11:36:35.029700   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0520 11:36:35.029714   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |     <dhcp>
	I0520 11:36:35.029726   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0520 11:36:35.029740   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |     </dhcp>
	I0520 11:36:35.029748   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   </ip>
	I0520 11:36:35.029753   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG |   
	I0520 11:36:35.029761   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | </network>
	I0520 11:36:35.029771   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | 
	I0520 11:36:35.035567   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | trying to create private KVM network mk-old-k8s-version-210420 192.168.83.0/24...
	I0520 11:36:35.110126   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420 ...
	I0520 11:36:35.110161   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | private KVM network mk-old-k8s-version-210420 192.168.83.0/24 created
	I0520 11:36:35.110185   54732 main.go:141] libmachine: (old-k8s-version-210420) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 11:36:35.110205   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.110056   54755 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:35.110259   54732 main.go:141] libmachine: (old-k8s-version-210420) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 11:36:35.361587   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.361434   54755 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa...
	I0520 11:36:35.721187   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.721073   54755 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/old-k8s-version-210420.rawdisk...
	I0520 11:36:35.721215   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Writing magic tar header
	I0520 11:36:35.721232   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Writing SSH key tar header
	I0520 11:36:35.721245   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:35.721174   54755 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420 ...
	I0520 11:36:35.721274   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420
	I0520 11:36:35.721313   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420 (perms=drwx------)
	I0520 11:36:35.721328   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 11:36:35.721339   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 11:36:35.721394   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 11:36:35.721426   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:36:35.721441   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 11:36:35.721457   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 11:36:35.721471   54732 main.go:141] libmachine: (old-k8s-version-210420) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 11:36:35.721482   54732 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:36:35.721500   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 11:36:35.721512   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 11:36:35.721526   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home/jenkins
	I0520 11:36:35.721542   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Checking permissions on dir: /home
	I0520 11:36:35.721571   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Skipping /home - not owner
	I0520 11:36:35.722625   54732 main.go:141] libmachine: (old-k8s-version-210420) define libvirt domain using xml: 
	I0520 11:36:35.722650   54732 main.go:141] libmachine: (old-k8s-version-210420) <domain type='kvm'>
	I0520 11:36:35.722662   54732 main.go:141] libmachine: (old-k8s-version-210420)   <name>old-k8s-version-210420</name>
	I0520 11:36:35.722671   54732 main.go:141] libmachine: (old-k8s-version-210420)   <memory unit='MiB'>2200</memory>
	I0520 11:36:35.722680   54732 main.go:141] libmachine: (old-k8s-version-210420)   <vcpu>2</vcpu>
	I0520 11:36:35.722688   54732 main.go:141] libmachine: (old-k8s-version-210420)   <features>
	I0520 11:36:35.722696   54732 main.go:141] libmachine: (old-k8s-version-210420)     <acpi/>
	I0520 11:36:35.722703   54732 main.go:141] libmachine: (old-k8s-version-210420)     <apic/>
	I0520 11:36:35.722713   54732 main.go:141] libmachine: (old-k8s-version-210420)     <pae/>
	I0520 11:36:35.722718   54732 main.go:141] libmachine: (old-k8s-version-210420)     
	I0520 11:36:35.722727   54732 main.go:141] libmachine: (old-k8s-version-210420)   </features>
	I0520 11:36:35.722732   54732 main.go:141] libmachine: (old-k8s-version-210420)   <cpu mode='host-passthrough'>
	I0520 11:36:35.722759   54732 main.go:141] libmachine: (old-k8s-version-210420)   
	I0520 11:36:35.722794   54732 main.go:141] libmachine: (old-k8s-version-210420)   </cpu>
	I0520 11:36:35.722804   54732 main.go:141] libmachine: (old-k8s-version-210420)   <os>
	I0520 11:36:35.722815   54732 main.go:141] libmachine: (old-k8s-version-210420)     <type>hvm</type>
	I0520 11:36:35.722833   54732 main.go:141] libmachine: (old-k8s-version-210420)     <boot dev='cdrom'/>
	I0520 11:36:35.722844   54732 main.go:141] libmachine: (old-k8s-version-210420)     <boot dev='hd'/>
	I0520 11:36:35.722854   54732 main.go:141] libmachine: (old-k8s-version-210420)     <bootmenu enable='no'/>
	I0520 11:36:35.722868   54732 main.go:141] libmachine: (old-k8s-version-210420)   </os>
	I0520 11:36:35.722881   54732 main.go:141] libmachine: (old-k8s-version-210420)   <devices>
	I0520 11:36:35.722888   54732 main.go:141] libmachine: (old-k8s-version-210420)     <disk type='file' device='cdrom'>
	I0520 11:36:35.722905   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/boot2docker.iso'/>
	I0520 11:36:35.722916   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target dev='hdc' bus='scsi'/>
	I0520 11:36:35.722925   54732 main.go:141] libmachine: (old-k8s-version-210420)       <readonly/>
	I0520 11:36:35.722935   54732 main.go:141] libmachine: (old-k8s-version-210420)     </disk>
	I0520 11:36:35.722945   54732 main.go:141] libmachine: (old-k8s-version-210420)     <disk type='file' device='disk'>
	I0520 11:36:35.722963   54732 main.go:141] libmachine: (old-k8s-version-210420)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 11:36:35.722981   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/old-k8s-version-210420.rawdisk'/>
	I0520 11:36:35.722993   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target dev='hda' bus='virtio'/>
	I0520 11:36:35.723005   54732 main.go:141] libmachine: (old-k8s-version-210420)     </disk>
	I0520 11:36:35.723016   54732 main.go:141] libmachine: (old-k8s-version-210420)     <interface type='network'>
	I0520 11:36:35.723033   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source network='mk-old-k8s-version-210420'/>
	I0520 11:36:35.723048   54732 main.go:141] libmachine: (old-k8s-version-210420)       <model type='virtio'/>
	I0520 11:36:35.723059   54732 main.go:141] libmachine: (old-k8s-version-210420)     </interface>
	I0520 11:36:35.723073   54732 main.go:141] libmachine: (old-k8s-version-210420)     <interface type='network'>
	I0520 11:36:35.723090   54732 main.go:141] libmachine: (old-k8s-version-210420)       <source network='default'/>
	I0520 11:36:35.723098   54732 main.go:141] libmachine: (old-k8s-version-210420)       <model type='virtio'/>
	I0520 11:36:35.723103   54732 main.go:141] libmachine: (old-k8s-version-210420)     </interface>
	I0520 11:36:35.723110   54732 main.go:141] libmachine: (old-k8s-version-210420)     <serial type='pty'>
	I0520 11:36:35.723135   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target port='0'/>
	I0520 11:36:35.723156   54732 main.go:141] libmachine: (old-k8s-version-210420)     </serial>
	I0520 11:36:35.723170   54732 main.go:141] libmachine: (old-k8s-version-210420)     <console type='pty'>
	I0520 11:36:35.723180   54732 main.go:141] libmachine: (old-k8s-version-210420)       <target type='serial' port='0'/>
	I0520 11:36:35.723193   54732 main.go:141] libmachine: (old-k8s-version-210420)     </console>
	I0520 11:36:35.723227   54732 main.go:141] libmachine: (old-k8s-version-210420)     <rng model='virtio'>
	I0520 11:36:35.723241   54732 main.go:141] libmachine: (old-k8s-version-210420)       <backend model='random'>/dev/random</backend>
	I0520 11:36:35.723251   54732 main.go:141] libmachine: (old-k8s-version-210420)     </rng>
	I0520 11:36:35.723261   54732 main.go:141] libmachine: (old-k8s-version-210420)     
	I0520 11:36:35.723269   54732 main.go:141] libmachine: (old-k8s-version-210420)     
	I0520 11:36:35.723279   54732 main.go:141] libmachine: (old-k8s-version-210420)   </devices>
	I0520 11:36:35.723290   54732 main.go:141] libmachine: (old-k8s-version-210420) </domain>
	I0520 11:36:35.723299   54732 main.go:141] libmachine: (old-k8s-version-210420) 
	I0520 11:36:35.727234   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:7a:d0:98 in network default
	I0520 11:36:35.727750   54732 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:36:35.727768   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:35.728436   54732 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:36:35.728679   54732 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:36:35.729118   54732 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:36:35.729706   54732 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:36:37.054514   54732 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:36:37.055222   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:37.055663   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:37.055693   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:37.055611   54755 retry.go:31] will retry after 250.742445ms: waiting for machine to come up
	I0520 11:36:37.308328   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:37.308967   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:37.308995   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:37.308910   54755 retry.go:31] will retry after 389.201353ms: waiting for machine to come up
	I0520 11:36:37.699518   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:37.699981   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:37.700011   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:37.699941   54755 retry.go:31] will retry after 435.785471ms: waiting for machine to come up
	I0520 11:36:38.137687   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:38.138161   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:38.138190   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:38.138123   54755 retry.go:31] will retry after 521.950828ms: waiting for machine to come up
	I0520 11:36:38.661976   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:38.662533   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:38.662556   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:38.662488   54755 retry.go:31] will retry after 628.547713ms: waiting for machine to come up
	I0520 11:36:39.292067   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:39.292558   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:39.292582   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:39.292508   54755 retry.go:31] will retry after 602.715223ms: waiting for machine to come up
	I0520 11:36:39.897344   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:39.897833   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:39.897862   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:39.897782   54755 retry.go:31] will retry after 880.165132ms: waiting for machine to come up
	I0520 11:36:40.779736   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:40.780219   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:40.780248   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:40.780178   54755 retry.go:31] will retry after 1.272197275s: waiting for machine to come up
	I0520 11:36:42.054726   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:42.055207   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:42.055229   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:42.055143   54755 retry.go:31] will retry after 1.67461889s: waiting for machine to come up
	I0520 11:36:43.731060   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:43.731544   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:43.731573   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:43.731493   54755 retry.go:31] will retry after 2.249865047s: waiting for machine to come up
	I0520 11:36:45.983452   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:45.984016   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:45.984068   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:45.983968   54755 retry.go:31] will retry after 2.737409087s: waiting for machine to come up
	I0520 11:36:48.722775   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:48.723225   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:48.723255   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:48.723183   54755 retry.go:31] will retry after 2.212815155s: waiting for machine to come up
	I0520 11:36:50.938087   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:50.938585   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:50.938615   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:50.938538   54755 retry.go:31] will retry after 3.187157159s: waiting for machine to come up
	I0520 11:36:54.127263   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:54.127657   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:36:54.127683   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:36:54.127614   54755 retry.go:31] will retry after 4.331061914s: waiting for machine to come up
	I0520 11:36:58.463061   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.463494   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.463530   54732 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
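The repeated "will retry after ..." entries above come from a retry loop with a growing, jittered backoff that keeps polling the network's DHCP leases until the freshly created domain reports an address. A minimal generic sketch of that pattern (the helper name, intervals and jitter are illustrative assumptions, not minikube's retry.go):

// Sketch of a retry-with-backoff loop like the one producing the
// "will retry after ..." lines above. lookupIP is a stand-in for querying
// libvirt's DHCP leases for the domain's MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

func lookupIP() (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the increasing intervals
		// in the log (250ms, 389ms, 435ms, ...).
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}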
	I0520 11:36:58.463539   54732 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:36:58.463857   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420
	I0520 11:36:58.538368   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:36:58.538404   54732 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:36:58.538417   54732 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:36:58.540887   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.541361   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.541394   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.541531   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:36:58.541561   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:36:58.541590   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:36:58.541614   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:36:58.541632   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:36:58.669046   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:36:58.669304   54732 main.go:141] libmachine: (old-k8s-version-210420) KVM machine creation complete!
	I0520 11:36:58.669649   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:36:58.670148   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:58.670359   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:36:58.670540   54732 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 11:36:58.670559   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:36:58.671802   54732 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 11:36:58.671815   54732 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 11:36:58.671820   54732 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 11:36:58.671826   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.673985   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.674327   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.674357   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.674419   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.674570   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.674717   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.674863   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.675080   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.675315   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.675331   54732 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 11:36:58.788510   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
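Both SSH probes above simply run `exit 0` on the guest and treat a clean exit status as "SSH is available". A rough equivalent using golang.org/x/crypto/ssh (the host, user and key path are placeholders taken from the log; this is not libmachine's implementation):

// Sketch of probing SSH availability by running "exit 0" over an
// authenticated session. Key path, user and address are assumptions.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // machine's private key
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}

	client, err := ssh.Dial("tcp", "192.168.83.100:22", cfg)
	if err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// A successful "exit 0" means the guest's sshd is up and accepting logins.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("exit 0 failed: %v", err)
	}
	log.Println("SSH is available")
}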
	I0520 11:36:58.788531   54732 main.go:141] libmachine: Detecting the provisioner...
	I0520 11:36:58.788538   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.791777   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.792093   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.792120   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.792298   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.792515   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.792694   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.792833   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.792980   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.793159   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.793169   54732 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 11:36:58.905732   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 11:36:58.905795   54732 main.go:141] libmachine: found compatible host: buildroot
	I0520 11:36:58.905808   54732 main.go:141] libmachine: Provisioning with buildroot...
	I0520 11:36:58.905817   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:58.906037   54732 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:36:58.906069   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:58.906235   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:58.908812   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.909252   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:58.909279   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:58.909457   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:58.909640   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.909803   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:58.909963   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:58.910137   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:58.910360   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:58.910382   54732 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:36:59.043365   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:36:59.043395   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.046494   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.047005   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.047029   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.047181   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.047365   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.047541   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.047678   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.047866   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:59.048197   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:59.048223   54732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:36:59.175659   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:36:59.175698   54732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:36:59.175720   54732 buildroot.go:174] setting up certificates
	I0520 11:36:59.175734   54732 provision.go:84] configureAuth start
	I0520 11:36:59.175750   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:36:59.176024   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:36:59.179114   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.179481   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.179513   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.179636   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.182365   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.182693   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.182728   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.182892   54732 provision.go:143] copyHostCerts
	I0520 11:36:59.182948   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:36:59.182963   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:36:59.183017   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:36:59.183120   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:36:59.183130   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:36:59.183162   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:36:59.183225   54732 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:36:59.183232   54732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:36:59.183252   54732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:36:59.183325   54732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:36:59.516574   54732 provision.go:177] copyRemoteCerts
	I0520 11:36:59.516648   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:36:59.516694   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.520890   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.521316   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.521452   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.521534   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.521799   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.521994   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.522137   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:36:59.615162   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:36:59.639370   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:36:59.669948   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:36:59.697465   54732 provision.go:87] duration metric: took 521.714661ms to configureAuth
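configureAuth generates a server certificate whose SANs match the san=[...] list logged earlier (127.0.0.1, the machine IP, localhost, minikube and the hostname), signs it with the local CA, and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A condensed crypto/x509 sketch of producing such a SAN certificate (key sizes, lifetimes and output paths are illustrative assumptions, and the CA is created inline only so the sketch is self-contained):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In the real flow the CA key pair already exists under .minikube/certs.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the IP and DNS SANs from the san=[...] list above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-210420"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.100")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-210420"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
	if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
}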
	I0520 11:36:59.697495   54732 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:36:59.697690   54732 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:36:59.697772   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:36:59.700598   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.700997   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:36:59.701028   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:36:59.701242   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:36:59.701414   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.701561   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:36:59.701663   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:36:59.701879   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:36:59.702086   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:36:59.702102   54732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:37:00.030830   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:37:00.030863   54732 main.go:141] libmachine: Checking connection to Docker...
	I0520 11:37:00.030875   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetURL
	I0520 11:37:00.032252   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using libvirt version 6000000
	I0520 11:37:00.034407   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.034740   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.034776   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.034998   54732 main.go:141] libmachine: Docker is up and running!
	I0520 11:37:00.035022   54732 main.go:141] libmachine: Reticulating splines...
	I0520 11:37:00.035028   54732 client.go:171] duration metric: took 25.015121735s to LocalClient.Create
	I0520 11:37:00.035057   54732 start.go:167] duration metric: took 25.015189379s to libmachine.API.Create "old-k8s-version-210420"
	I0520 11:37:00.035076   54732 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:37:00.035090   54732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:37:00.035116   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.035406   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:37:00.035442   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.037625   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.037965   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.037999   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.038068   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.038248   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.038449   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.038628   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:37:00.128149   54732 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:37:00.132646   54732 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:37:00.132670   54732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:37:00.132739   54732 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:37:00.132838   54732 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:37:00.132998   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:37:00.142874   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:00.169803   54732 start.go:296] duration metric: took 134.712167ms for postStartSetup
	I0520 11:37:00.169866   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:37:00.170434   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:00.173263   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.173608   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.173633   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.173934   54732 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:37:00.174155   54732 start.go:128] duration metric: took 25.173945173s to createHost
	I0520 11:37:00.174185   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.176961   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.177375   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.177397   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.177708   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.177888   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.178074   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.178247   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.178432   54732 main.go:141] libmachine: Using SSH client type: native
	I0520 11:37:00.178642   54732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:37:00.178659   54732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:37:00.293928   54732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205020.272115335
	
	I0520 11:37:00.293949   54732 fix.go:216] guest clock: 1716205020.272115335
	I0520 11:37:00.293959   54732 fix.go:229] Guest: 2024-05-20 11:37:00.272115335 +0000 UTC Remote: 2024-05-20 11:37:00.174170865 +0000 UTC m=+25.282981176 (delta=97.94447ms)
	I0520 11:37:00.294015   54732 fix.go:200] guest clock delta is within tolerance: 97.94447ms
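The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the absolute delta stays within a tolerance. A small sketch of that comparison using the values from the log (the tolerance constant is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns the
// absolute difference from the given local timestamp.
func guestClockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parse seconds %q: %w", parts[0], err)
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, fmt.Errorf("parse nanoseconds %q: %w", parts[1], err)
		}
	}
	d := time.Unix(sec, nsec).Sub(local)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	guest := "1716205020.272115335"
	local := time.Unix(1716205020, 174170865)
	delta, err := guestClockDelta(guest, local)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
}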
	I0520 11:37:00.294023   54732 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 25.293893571s
	I0520 11:37:00.294050   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.294323   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:00.297134   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.297565   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.297596   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.297885   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298445   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298625   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:37:00.298696   54732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:37:00.298744   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.298828   54732 ssh_runner.go:195] Run: cat /version.json
	I0520 11:37:00.298860   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:37:00.301978   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302405   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302462   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.302490   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.302601   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.302743   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.302885   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.302980   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:00.303000   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:00.303029   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:37:00.303310   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:37:00.303464   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:37:00.303598   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:37:00.303715   54732 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:37:00.415297   54732 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:37:00.415384   54732 ssh_runner.go:195] Run: systemctl --version
	I0520 11:37:00.422441   54732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:37:00.586815   54732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:37:00.594067   54732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:37:00.594148   54732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:37:00.610884   54732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:37:00.610907   54732 start.go:494] detecting cgroup driver to use...
	I0520 11:37:00.610976   54732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:37:00.628469   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:37:00.644580   54732 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:37:00.644642   54732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:37:00.659819   54732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:37:00.673915   54732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:37:00.804460   54732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:37:00.986774   54732 docker.go:233] disabling docker service ...
	I0520 11:37:00.986844   54732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:37:01.004845   54732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:37:01.019356   54732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:37:01.148680   54732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:37:01.269540   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:37:01.285312   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:37:01.304010   54732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:37:01.304084   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.316378   54732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:37:01.316451   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.327728   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.338698   54732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:37:01.349526   54732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:37:01.360959   54732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:37:01.373008   54732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:37:01.373060   54732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:37:01.386782   54732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:37:01.397163   54732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:01.518110   54732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:37:01.665579   54732 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:37:01.665652   54732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:37:01.670666   54732 start.go:562] Will wait 60s for crictl version
	I0520 11:37:01.670709   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:01.674485   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:37:01.728836   54732 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:37:01.728982   54732 ssh_runner.go:195] Run: crio --version
	I0520 11:37:01.760458   54732 ssh_runner.go:195] Run: crio --version
	I0520 11:37:01.792553   54732 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:37:01.793799   54732 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:37:01.796811   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:01.797255   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:36:50 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:37:01.797279   54732 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:37:01.797553   54732 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:37:01.801923   54732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:01.814795   54732 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:37:01.814936   54732 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:37:01.815002   54732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:01.846574   54732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
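The preload check above runs `sudo crictl images --output json` and looks for registry.k8s.io/kube-apiserver:v1.20.0 among the reported repo tags; when it is missing, minikube assumes the preloaded image tarball has not been applied yet and falls back to copying and extracting it. A hedged sketch of parsing that JSON (field names follow crictl's CRI-style output; the sample input is made up for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// imageList models only the subset of `crictl images --output json`
// that this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the wanted tag appears in the crictl output.
func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Made-up sample output for illustration.
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded images present:", ok) // false -> extract the preload tarball
}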
	I0520 11:37:01.846646   54732 ssh_runner.go:195] Run: which lz4
	I0520 11:37:01.850943   54732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:37:01.855547   54732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:37:01.855574   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:37:03.680746   54732 crio.go:462] duration metric: took 1.829828922s to copy over tarball
	I0520 11:37:03.680826   54732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:37:06.452881   54732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.77202524s)
	I0520 11:37:06.452914   54732 crio.go:469] duration metric: took 2.77213672s to extract the tarball
	I0520 11:37:06.452948   54732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:37:06.496108   54732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:37:06.542385   54732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:37:06.542414   54732 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:37:06.542466   54732 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:06.542500   54732 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.542517   54732 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.542528   54732 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.542579   54732 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.542620   54732 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.542508   54732 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.542588   54732 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:37:06.543881   54732 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.543932   54732 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.544060   54732 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:37:06.544066   54732 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.544057   54732 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.544093   54732 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.544088   54732 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.544234   54732 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:06.689722   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.692704   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.700348   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.715480   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.736341   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:37:06.740710   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.745468   54732 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:37:06.745506   54732 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.745544   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.755418   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.845909   54732 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:37:06.845942   54732 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:37:06.845961   54732 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.845977   54732 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.846011   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.846035   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.893251   54732 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:37:06.893310   54732 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:37:06.893290   54732 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:37:06.893347   54732 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.893361   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.893406   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.904838   54732 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:37:06.904904   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:37:06.904920   54732 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:06.904968   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.910151   54732 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:37:06.910188   54732 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:37:06.910215   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:37:06.910228   54732 ssh_runner.go:195] Run: which crictl
	I0520 11:37:06.910230   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:37:06.910288   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:37:06.910363   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:37:06.912545   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:37:07.003917   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:37:07.021998   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:37:07.038760   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:37:07.038853   54732 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:37:07.038887   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:37:07.039429   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:37:07.042472   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:37:07.075138   54732 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:37:07.461848   54732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:37:07.605217   54732 cache_images.go:92] duration metric: took 1.062783538s to LoadCachedImages
	W0520 11:37:07.605318   54732 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:37:07.605336   54732 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:37:07.605472   54732 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:37:07.605583   54732 ssh_runner.go:195] Run: crio config
	I0520 11:37:07.668258   54732 cni.go:84] Creating CNI manager for ""
	I0520 11:37:07.668286   54732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:37:07.668310   54732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:37:07.668337   54732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:37:07.668520   54732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
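	The generated KubeletConfiguration above pins cgroupDriver: cgroupfs, and a cgroup-driver mismatch between the kubelet and the runtime is a common cause of the kubelet failing to come up later in this run. A minimal consistency check, not captured in this log, assuming node access via `minikube ssh -p old-k8s-version-210420` and the paths printed above:
	# sketch only: compare CRI-O's cgroup manager with the kubelet's configured cgroup driver
	sudo crio config | grep -i cgroup_manager
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml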
	
	I0520 11:37:07.668591   54732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:37:07.679647   54732 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:37:07.679713   54732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:37:07.689918   54732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:37:07.711056   54732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:37:07.730271   54732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:37:07.751969   54732 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:37:07.756727   54732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:37:07.771312   54732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:37:07.893449   54732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:37:07.911536   54732 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:37:07.911562   54732 certs.go:194] generating shared ca certs ...
	I0520 11:37:07.911582   54732 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:07.911754   54732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:37:07.911807   54732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:37:07.911819   54732 certs.go:256] generating profile certs ...
	I0520 11:37:07.911894   54732 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:37:07.911913   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt with IP's: []
	I0520 11:37:08.010690   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt ...
	I0520 11:37:08.010774   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: {Name:mk37e4d8e6b70b9df4bd1f33a3bb1ce774741c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.010992   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key ...
	I0520 11:37:08.011048   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key: {Name:mkef658583b929f627b0a4742ad863dab8569cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.011194   54732 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:37:08.011251   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.100]
	I0520 11:37:08.123361   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 ...
	I0520 11:37:08.123391   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9: {Name:mk126bf7eda8dbc6aeee644e3e54901634a644ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.123558   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9 ...
	I0520 11:37:08.123577   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9: {Name:mkf6c45a2b2e90a3cb664c8a2d9327ba46d6c43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.123773   54732 certs.go:381] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt.a7d5d0f9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt
	I0520 11:37:08.123864   54732 certs.go:385] copying /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9 -> /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key
	I0520 11:37:08.123915   54732 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:37:08.123933   54732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt with IP's: []
	I0520 11:37:08.287603   54732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt ...
	I0520 11:37:08.287636   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt: {Name:mk5a61fe0b766f2d87c141b576ed804ead7b2cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.287789   54732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key ...
	I0520 11:37:08.287803   54732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key: {Name:mk6ecec165027e23fe429021af4b2049861c5527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:37:08.287990   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:37:08.288041   54732 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:37:08.288052   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:37:08.288094   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:37:08.288129   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:37:08.288162   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:37:08.288214   54732 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:37:08.288774   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:37:08.323616   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:37:08.353034   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:37:08.383245   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:37:08.412983   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:37:08.443285   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:37:08.474262   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:37:08.507794   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:37:08.542166   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:37:08.572287   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:37:08.603134   54732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:37:08.633088   54732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:37:08.650484   54732 ssh_runner.go:195] Run: openssl version
	I0520 11:37:08.657729   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:37:08.669476   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.674540   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.674594   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:37:08.682466   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:37:08.694751   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:37:08.709664   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.715955   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.716013   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:37:08.724137   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:37:08.736195   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:37:08.747804   54732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.754314   54732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.754379   54732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:37:08.762305   54732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:37:08.777044   54732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:37:08.782996   54732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 11:37:08.783051   54732 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:37:08.783153   54732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:37:08.783206   54732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:37:08.832683   54732 cri.go:89] found id: ""
	I0520 11:37:08.832748   54732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 11:37:08.843324   54732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:37:08.853560   54732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:37:08.863070   54732 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:37:08.863089   54732 kubeadm.go:156] found existing configuration files:
	
	I0520 11:37:08.863140   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:37:08.876732   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:37:08.876795   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:37:08.888527   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:37:08.904546   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:37:08.904618   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:37:08.920506   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:37:08.932922   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:37:08.932998   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:37:08.954646   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:37:08.966058   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:37:08.966132   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:37:08.978052   54732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:37:09.252760   54732 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:39:06.870452   54732 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:39:06.870569   54732 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:39:06.872019   54732 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:39:06.872105   54732 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:39:06.872214   54732 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:39:06.872397   54732 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:39:06.872485   54732 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:39:06.872538   54732 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:39:06.874474   54732 out.go:204]   - Generating certificates and keys ...
	I0520 11:39:06.874567   54732 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:39:06.874665   54732 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:39:06.874778   54732 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 11:39:06.874866   54732 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 11:39:06.874951   54732 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 11:39:06.875031   54732 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 11:39:06.875109   54732 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 11:39:06.875267   54732 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-210420] and IPs [192.168.83.100 127.0.0.1 ::1]
	I0520 11:39:06.875316   54732 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 11:39:06.875430   54732 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-210420] and IPs [192.168.83.100 127.0.0.1 ::1]
	I0520 11:39:06.875494   54732 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 11:39:06.875546   54732 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 11:39:06.875587   54732 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 11:39:06.875650   54732 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:39:06.875701   54732 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:39:06.875755   54732 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:39:06.875830   54732 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:39:06.875897   54732 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:39:06.876045   54732 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:39:06.876121   54732 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:39:06.876173   54732 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:39:06.876241   54732 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:39:06.877814   54732 out.go:204]   - Booting up control plane ...
	I0520 11:39:06.877901   54732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:39:06.877977   54732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:39:06.878035   54732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:39:06.878112   54732 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:39:06.878339   54732 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:39:06.878421   54732 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:39:06.878525   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:39:06.878750   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:39:06.878810   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:39:06.879031   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:39:06.879097   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:39:06.879264   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:39:06.879331   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:39:06.879523   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:39:06.879596   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:39:06.879768   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:39:06.879776   54732 kubeadm.go:309] 
	I0520 11:39:06.879809   54732 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:39:06.879843   54732 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:39:06.879853   54732 kubeadm.go:309] 
	I0520 11:39:06.879883   54732 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:39:06.879916   54732 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:39:06.880008   54732 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:39:06.880019   54732 kubeadm.go:309] 
	I0520 11:39:06.880109   54732 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:39:06.880145   54732 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:39:06.880174   54732 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:39:06.880180   54732 kubeadm.go:309] 
	I0520 11:39:06.880282   54732 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:39:06.880357   54732 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:39:06.880363   54732 kubeadm.go:309] 
	I0520 11:39:06.880449   54732 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:39:06.880521   54732 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:39:06.880601   54732 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:39:06.880684   54732 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:39:06.880746   54732 kubeadm.go:309] 
	W0520 11:39:06.880820   54732 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-210420] and IPs [192.168.83.100 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-210420] and IPs [192.168.83.100 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-210420] and IPs [192.168.83.100 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-210420] and IPs [192.168.83.100 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
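	The triage steps kubeadm recommends above can be run directly on the node; a minimal sketch using only the commands and the CRI-O socket path already printed in this log (node access via `minikube ssh -p old-k8s-version-210420` is assumed, and CONTAINERID is whatever the ps output reports):
	# sketch only: follow kubeadm's own troubleshooting suggestions on the node
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers via CRI-O, then inspect the failing one's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID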
	
	I0520 11:39:06.880872   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:39:07.352365   54732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:39:07.370353   54732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:39:07.383396   54732 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:39:07.383423   54732 kubeadm.go:156] found existing configuration files:
	
	I0520 11:39:07.383484   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:39:07.394548   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:39:07.394612   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:39:07.404757   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:39:07.414241   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:39:07.414291   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:39:07.423674   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:39:07.434052   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:39:07.434118   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:39:07.443817   54732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:39:07.453000   54732 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:39:07.453055   54732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:39:07.462906   54732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:39:07.695143   54732 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:41:03.444918   54732 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:41:03.445084   54732 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:41:03.446449   54732 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:41:03.446551   54732 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:41:03.446647   54732 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:41:03.446768   54732 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:41:03.446915   54732 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:41:03.447031   54732 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:41:03.448689   54732 out.go:204]   - Generating certificates and keys ...
	I0520 11:41:03.448753   54732 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:41:03.448808   54732 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:41:03.448875   54732 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:41:03.448970   54732 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:41:03.449051   54732 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:41:03.449129   54732 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:41:03.449209   54732 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:41:03.449318   54732 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:41:03.449434   54732 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:41:03.449536   54732 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:41:03.449598   54732 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:41:03.449676   54732 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:41:03.449742   54732 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:41:03.449820   54732 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:41:03.449907   54732 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:41:03.449976   54732 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:41:03.450121   54732 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:41:03.450253   54732 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:41:03.450321   54732 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:41:03.450427   54732 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:41:03.451864   54732 out.go:204]   - Booting up control plane ...
	I0520 11:41:03.451947   54732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:41:03.452022   54732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:41:03.452097   54732 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:41:03.452195   54732 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:41:03.452372   54732 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:41:03.452437   54732 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:41:03.452506   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:41:03.452737   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:41:03.452847   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:41:03.453058   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:41:03.453148   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:41:03.453347   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:41:03.453451   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:41:03.453665   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:41:03.453729   54732 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:41:03.453943   54732 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:41:03.453951   54732 kubeadm.go:309] 
	I0520 11:41:03.454012   54732 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:41:03.454059   54732 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:41:03.454074   54732 kubeadm.go:309] 
	I0520 11:41:03.454114   54732 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:41:03.454150   54732 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:41:03.454285   54732 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:41:03.454293   54732 kubeadm.go:309] 
	I0520 11:41:03.454384   54732 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:41:03.454437   54732 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:41:03.454477   54732 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:41:03.454485   54732 kubeadm.go:309] 
	I0520 11:41:03.454596   54732 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:41:03.454696   54732 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:41:03.454706   54732 kubeadm.go:309] 
	I0520 11:41:03.454840   54732 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:41:03.454933   54732 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:41:03.455017   54732 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:41:03.455083   54732 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:41:03.455125   54732 kubeadm.go:309] 
	I0520 11:41:03.455149   54732 kubeadm.go:393] duration metric: took 3m54.67210194s to StartCluster
	I0520 11:41:03.455189   54732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:41:03.455240   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:41:03.505729   54732 cri.go:89] found id: ""
	I0520 11:41:03.505754   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.505762   54732 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:41:03.505768   54732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:41:03.505813   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:41:03.542383   54732 cri.go:89] found id: ""
	I0520 11:41:03.542413   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.542424   54732 logs.go:278] No container was found matching "etcd"
	I0520 11:41:03.542432   54732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:41:03.542497   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:41:03.575843   54732 cri.go:89] found id: ""
	I0520 11:41:03.575867   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.575876   54732 logs.go:278] No container was found matching "coredns"
	I0520 11:41:03.575881   54732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:41:03.575927   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:41:03.614187   54732 cri.go:89] found id: ""
	I0520 11:41:03.614213   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.614220   54732 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:41:03.614226   54732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:41:03.614276   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:41:03.650186   54732 cri.go:89] found id: ""
	I0520 11:41:03.650210   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.650222   54732 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:41:03.650227   54732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:41:03.650279   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:41:03.684191   54732 cri.go:89] found id: ""
	I0520 11:41:03.684222   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.684231   54732 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:41:03.684239   54732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:41:03.684311   54732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:41:03.716280   54732 cri.go:89] found id: ""
	I0520 11:41:03.716311   54732 logs.go:276] 0 containers: []
	W0520 11:41:03.716322   54732 logs.go:278] No container was found matching "kindnet"
	I0520 11:41:03.716332   54732 logs.go:123] Gathering logs for container status ...
	I0520 11:41:03.716346   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:41:03.757209   54732 logs.go:123] Gathering logs for kubelet ...
	I0520 11:41:03.757237   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:41:03.808564   54732 logs.go:123] Gathering logs for dmesg ...
	I0520 11:41:03.808596   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:41:03.821715   54732 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:41:03.821738   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:41:03.938212   54732 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:41:03.938232   54732 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:41:03.938245   54732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0520 11:41:04.040204   54732 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:41:04.040259   54732 out.go:239] * 
	W0520 11:41:04.040326   54732 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:41:04.040357   54732 out.go:239] * 
	W0520 11:41:04.041244   54732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:41:04.044166   54732 out.go:177] 
	W0520 11:41:04.045419   54732 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:41:04.045482   54732 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:41:04.045519   54732 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:41:04.047686   54732 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-210420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 6 (224.661073ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:41:04.314737   57578 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (269.44s)
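A minimal troubleshooting sketch based on the failure above: it re-runs, from the host, the diagnostics that the kubeadm output itself suggests (kubelet status, kubelet journal, CRI-O container listing) and then retries the start with the kubelet cgroup driver pinned to systemd, as the "Suggestion" line proposes. The profile name and start flags are taken from this run's arguments; whether the cgroup-driver override actually resolves this failure is not verified here.

    # Inspect the kubelet on the failing node (commands suggested in the log above).
    minikube -p old-k8s-version-210420 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-210420 ssh -- sudo journalctl -xeu kubelet -n 200

    # List control-plane containers via CRI-O; inspect the logs of any failing one.
    minikube -p old-k8s-version-210420 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    # minikube -p old-k8s-version-210420 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

    # Retry the start with the kubelet cgroup driver set to systemd, per the suggestion above.
    minikube start -p old-k8s-version-210420 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd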

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-814599 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-814599 --alsologtostderr -v=3: exit status 82 (2m0.555705866s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-814599"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:38:24.925277   55834 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:38:24.925533   55834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:38:24.925543   55834 out.go:304] Setting ErrFile to fd 2...
	I0520 11:38:24.925547   55834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:38:24.925725   55834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:38:24.925952   55834 out.go:298] Setting JSON to false
	I0520 11:38:24.926038   55834 mustload.go:65] Loading cluster: no-preload-814599
	I0520 11:38:24.926356   55834 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:38:24.926420   55834 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:38:24.926579   55834 mustload.go:65] Loading cluster: no-preload-814599
	I0520 11:38:24.926674   55834 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:38:24.926700   55834 stop.go:39] StopHost: no-preload-814599
	I0520 11:38:24.927059   55834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:38:24.927112   55834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:38:24.941609   55834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I0520 11:38:24.942043   55834 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:38:24.942589   55834 main.go:141] libmachine: Using API Version  1
	I0520 11:38:24.942611   55834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:38:24.942987   55834 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:38:24.945557   55834 out.go:177] * Stopping node "no-preload-814599"  ...
	I0520 11:38:24.946967   55834 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 11:38:24.947017   55834 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:38:24.947295   55834 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 11:38:24.947327   55834 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:38:24.950015   55834 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:24.950367   55834 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:37:15 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:38:24.950389   55834 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:38:24.950568   55834 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:38:24.950738   55834 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:38:24.950931   55834 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:38:24.951069   55834 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:38:25.073592   55834 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 11:38:25.128896   55834 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 11:38:25.208358   55834 main.go:141] libmachine: Stopping "no-preload-814599"...
	I0520 11:38:25.208392   55834 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:38:25.210272   55834 main.go:141] libmachine: (no-preload-814599) Calling .Stop
	I0520 11:38:25.214699   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 0/120
	I0520 11:38:26.216190   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 1/120
	I0520 11:38:27.218649   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 2/120
	I0520 11:38:28.220190   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 3/120
	I0520 11:38:29.222537   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 4/120
	I0520 11:38:30.224084   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 5/120
	I0520 11:38:31.225964   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 6/120
	I0520 11:38:32.228032   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 7/120
	I0520 11:38:33.230105   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 8/120
	I0520 11:38:34.231415   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 9/120
	I0520 11:38:35.233547   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 10/120
	I0520 11:38:36.235757   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 11/120
	I0520 11:38:37.237348   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 12/120
	I0520 11:38:38.239677   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 13/120
	I0520 11:38:39.241116   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 14/120
	I0520 11:38:40.242537   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 15/120
	I0520 11:38:41.244004   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 16/120
	I0520 11:38:42.246302   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 17/120
	I0520 11:38:43.247515   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 18/120
	I0520 11:38:44.249019   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 19/120
	I0520 11:38:45.251025   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 20/120
	I0520 11:38:46.252402   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 21/120
	I0520 11:38:47.254069   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 22/120
	I0520 11:38:48.255385   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 23/120
	I0520 11:38:49.256858   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 24/120
	I0520 11:38:50.258817   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 25/120
	I0520 11:38:51.260302   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 26/120
	I0520 11:38:52.261805   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 27/120
	I0520 11:38:53.263659   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 28/120
	I0520 11:38:54.265182   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 29/120
	I0520 11:38:55.267131   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 30/120
	I0520 11:38:56.268427   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 31/120
	I0520 11:38:57.269842   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 32/120
	I0520 11:38:58.271411   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 33/120
	I0520 11:38:59.273373   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 34/120
	I0520 11:39:00.288088   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 35/120
	I0520 11:39:01.289586   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 36/120
	I0520 11:39:02.291660   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 37/120
	I0520 11:39:03.292887   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 38/120
	I0520 11:39:04.294832   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 39/120
	I0520 11:39:05.297009   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 40/120
	I0520 11:39:06.298583   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 41/120
	I0520 11:39:07.300181   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 42/120
	I0520 11:39:08.301888   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 43/120
	I0520 11:39:09.303565   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 44/120
	I0520 11:39:10.305423   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 45/120
	I0520 11:39:11.307476   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 46/120
	I0520 11:39:12.308829   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 47/120
	I0520 11:39:13.310665   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 48/120
	I0520 11:39:14.312125   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 49/120
	I0520 11:39:15.314368   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 50/120
	I0520 11:39:16.316564   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 51/120
	I0520 11:39:17.318066   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 52/120
	I0520 11:39:18.319437   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 53/120
	I0520 11:39:19.321743   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 54/120
	I0520 11:39:20.323200   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 55/120
	I0520 11:39:21.324547   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 56/120
	I0520 11:39:22.325952   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 57/120
	I0520 11:39:23.327533   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 58/120
	I0520 11:39:24.329237   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 59/120
	I0520 11:39:25.331420   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 60/120
	I0520 11:39:26.333535   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 61/120
	I0520 11:39:27.335493   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 62/120
	I0520 11:39:28.336967   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 63/120
	I0520 11:39:29.338290   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 64/120
	I0520 11:39:30.339560   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 65/120
	I0520 11:39:31.341161   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 66/120
	I0520 11:39:32.343434   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 67/120
	I0520 11:39:33.344853   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 68/120
	I0520 11:39:34.346690   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 69/120
	I0520 11:39:35.348972   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 70/120
	I0520 11:39:36.350618   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 71/120
	I0520 11:39:37.352183   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 72/120
	I0520 11:39:38.354256   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 73/120
	I0520 11:39:39.355765   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 74/120
	I0520 11:39:40.357873   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 75/120
	I0520 11:39:41.359367   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 76/120
	I0520 11:39:42.361010   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 77/120
	I0520 11:39:43.362514   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 78/120
	I0520 11:39:44.364813   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 79/120
	I0520 11:39:45.366557   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 80/120
	I0520 11:39:46.368062   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 81/120
	I0520 11:39:47.370015   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 82/120
	I0520 11:39:48.371440   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 83/120
	I0520 11:39:49.373562   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 84/120
	I0520 11:39:50.375738   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 85/120
	I0520 11:39:51.377119   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 86/120
	I0520 11:39:52.379777   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 87/120
	I0520 11:39:53.381093   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 88/120
	I0520 11:39:54.383400   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 89/120
	I0520 11:39:55.385291   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 90/120
	I0520 11:39:56.387483   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 91/120
	I0520 11:39:57.388783   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 92/120
	I0520 11:39:58.390533   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 93/120
	I0520 11:39:59.391860   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 94/120
	I0520 11:40:00.393213   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 95/120
	I0520 11:40:01.394391   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 96/120
	I0520 11:40:02.395636   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 97/120
	I0520 11:40:03.397104   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 98/120
	I0520 11:40:04.399327   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 99/120
	I0520 11:40:05.400858   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 100/120
	I0520 11:40:06.402287   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 101/120
	I0520 11:40:07.403505   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 102/120
	I0520 11:40:08.404733   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 103/120
	I0520 11:40:09.406400   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 104/120
	I0520 11:40:10.408419   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 105/120
	I0520 11:40:11.409719   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 106/120
	I0520 11:40:12.411697   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 107/120
	I0520 11:40:13.413063   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 108/120
	I0520 11:40:14.415428   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 109/120
	I0520 11:40:15.417397   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 110/120
	I0520 11:40:16.418700   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 111/120
	I0520 11:40:17.420490   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 112/120
	I0520 11:40:18.421959   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 113/120
	I0520 11:40:19.423569   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 114/120
	I0520 11:40:20.425462   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 115/120
	I0520 11:40:21.426868   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 116/120
	I0520 11:40:22.428128   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 117/120
	I0520 11:40:23.429375   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 118/120
	I0520 11:40:24.431645   55834 main.go:141] libmachine: (no-preload-814599) Waiting for machine to stop 119/120
	I0520 11:40:25.433114   55834 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 11:40:25.433180   55834 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 11:40:25.434837   55834 out.go:177] 
	W0520 11:40:25.436037   55834 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 11:40:25.436072   55834 out.go:239] * 
	* 
	W0520 11:40:25.439000   55834 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:40:25.440365   55834 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-814599 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599: exit status 3 (18.478183704s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:40:43.921213   57271 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0520 11:40:43.921231   57271 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-814599" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.04s)
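The stop failures in this run share one pattern: the kvm2 driver polls the libvirt domain 120 times (roughly two minutes), the guest never leaves the "Running" state, and minikube gives up with GUEST_STOP_TIMEOUT (exit status 82). Below is a minimal triage sketch for reproducing this locally; it assumes the libvirt domain carries the profile name ("no-preload-814599"), which this log suggests but does not confirm.

	# Sketch only: manual triage when `minikube stop` times out with GUEST_STOP_TIMEOUT.
	# Assumes the kvm2 driver named the libvirt domain after the profile.
	virsh list --all                          # is the domain still reported as running?
	virsh shutdown no-preload-814599          # ask the guest for a graceful shutdown
	sleep 30
	virsh destroy no-preload-814599           # hard power-off if it is still running
	out/minikube-linux-amd64 status -p no-preload-814599 --format={{.Host}}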

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-274976 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-274976 --alsologtostderr -v=3: exit status 82 (2m0.472041034s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-274976"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:40:06.634182   57175 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:40:06.634442   57175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:40:06.634452   57175 out.go:304] Setting ErrFile to fd 2...
	I0520 11:40:06.634457   57175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:40:06.634639   57175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:40:06.634831   57175 out.go:298] Setting JSON to false
	I0520 11:40:06.634909   57175 mustload.go:65] Loading cluster: embed-certs-274976
	I0520 11:40:06.635231   57175 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:40:06.635294   57175 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:40:06.635448   57175 mustload.go:65] Loading cluster: embed-certs-274976
	I0520 11:40:06.635544   57175 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:40:06.635566   57175 stop.go:39] StopHost: embed-certs-274976
	I0520 11:40:06.635906   57175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:40:06.635959   57175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:40:06.650862   57175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0520 11:40:06.651289   57175 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:40:06.651766   57175 main.go:141] libmachine: Using API Version  1
	I0520 11:40:06.651786   57175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:40:06.652157   57175 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:40:06.654553   57175 out.go:177] * Stopping node "embed-certs-274976"  ...
	I0520 11:40:06.655834   57175 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 11:40:06.655855   57175 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:40:06.656057   57175 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 11:40:06.656081   57175 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:40:06.658897   57175 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:40:06.659383   57175 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:40:06.659420   57175 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:40:06.659581   57175 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:40:06.659749   57175 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:40:06.659940   57175 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:40:06.660114   57175 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:40:06.755264   57175 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 11:40:06.822087   57175 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 11:40:06.868492   57175 main.go:141] libmachine: Stopping "embed-certs-274976"...
	I0520 11:40:06.868516   57175 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:40:06.869934   57175 main.go:141] libmachine: (embed-certs-274976) Calling .Stop
	I0520 11:40:06.873819   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 0/120
	I0520 11:40:07.875346   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 1/120
	I0520 11:40:08.876757   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 2/120
	I0520 11:40:09.878323   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 3/120
	I0520 11:40:10.879651   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 4/120
	I0520 11:40:11.881883   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 5/120
	I0520 11:40:12.883496   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 6/120
	I0520 11:40:13.884906   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 7/120
	I0520 11:40:14.886292   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 8/120
	I0520 11:40:15.887525   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 9/120
	I0520 11:40:16.888901   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 10/120
	I0520 11:40:17.890406   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 11/120
	I0520 11:40:18.891678   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 12/120
	I0520 11:40:19.893580   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 13/120
	I0520 11:40:20.894974   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 14/120
	I0520 11:40:21.896746   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 15/120
	I0520 11:40:22.898138   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 16/120
	I0520 11:40:23.899473   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 17/120
	I0520 11:40:24.900967   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 18/120
	I0520 11:40:25.902159   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 19/120
	I0520 11:40:26.904222   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 20/120
	I0520 11:40:27.905482   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 21/120
	I0520 11:40:28.906797   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 22/120
	I0520 11:40:29.908031   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 23/120
	I0520 11:40:30.909445   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 24/120
	I0520 11:40:31.911311   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 25/120
	I0520 11:40:32.912471   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 26/120
	I0520 11:40:33.913854   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 27/120
	I0520 11:40:34.915275   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 28/120
	I0520 11:40:35.917365   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 29/120
	I0520 11:40:36.919404   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 30/120
	I0520 11:40:37.920859   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 31/120
	I0520 11:40:38.922154   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 32/120
	I0520 11:40:39.923524   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 33/120
	I0520 11:40:40.925482   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 34/120
	I0520 11:40:41.927380   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 35/120
	I0520 11:40:42.928734   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 36/120
	I0520 11:40:43.930013   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 37/120
	I0520 11:40:44.931209   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 38/120
	I0520 11:40:45.932713   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 39/120
	I0520 11:40:46.934886   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 40/120
	I0520 11:40:47.936711   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 41/120
	I0520 11:40:48.937883   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 42/120
	I0520 11:40:49.939201   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 43/120
	I0520 11:40:50.940537   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 44/120
	I0520 11:40:51.941823   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 45/120
	I0520 11:40:52.943299   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 46/120
	I0520 11:40:53.944529   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 47/120
	I0520 11:40:54.945928   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 48/120
	I0520 11:40:55.947785   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 49/120
	I0520 11:40:56.949827   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 50/120
	I0520 11:40:57.951456   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 51/120
	I0520 11:40:58.952866   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 52/120
	I0520 11:40:59.954495   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 53/120
	I0520 11:41:00.956063   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 54/120
	I0520 11:41:01.958113   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 55/120
	I0520 11:41:02.959459   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 56/120
	I0520 11:41:03.961807   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 57/120
	I0520 11:41:04.963157   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 58/120
	I0520 11:41:05.964679   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 59/120
	I0520 11:41:06.966627   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 60/120
	I0520 11:41:07.968151   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 61/120
	I0520 11:41:08.969400   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 62/120
	I0520 11:41:09.970823   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 63/120
	I0520 11:41:10.972114   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 64/120
	I0520 11:41:11.974057   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 65/120
	I0520 11:41:12.975265   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 66/120
	I0520 11:41:13.976727   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 67/120
	I0520 11:41:14.978109   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 68/120
	I0520 11:41:15.979647   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 69/120
	I0520 11:41:16.981873   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 70/120
	I0520 11:41:17.983212   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 71/120
	I0520 11:41:18.984707   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 72/120
	I0520 11:41:19.986154   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 73/120
	I0520 11:41:20.988245   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 74/120
	I0520 11:41:21.990396   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 75/120
	I0520 11:41:22.991605   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 76/120
	I0520 11:41:23.993113   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 77/120
	I0520 11:41:24.994387   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 78/120
	I0520 11:41:25.995844   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 79/120
	I0520 11:41:26.997930   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 80/120
	I0520 11:41:27.999335   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 81/120
	I0520 11:41:29.000817   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 82/120
	I0520 11:41:30.002298   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 83/120
	I0520 11:41:31.003547   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 84/120
	I0520 11:41:32.005682   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 85/120
	I0520 11:41:33.007081   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 86/120
	I0520 11:41:34.008512   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 87/120
	I0520 11:41:35.009656   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 88/120
	I0520 11:41:36.011091   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 89/120
	I0520 11:41:37.013203   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 90/120
	I0520 11:41:38.014651   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 91/120
	I0520 11:41:39.015791   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 92/120
	I0520 11:41:40.017015   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 93/120
	I0520 11:41:41.018187   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 94/120
	I0520 11:41:42.020129   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 95/120
	I0520 11:41:43.021580   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 96/120
	I0520 11:41:44.022960   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 97/120
	I0520 11:41:45.024304   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 98/120
	I0520 11:41:46.025696   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 99/120
	I0520 11:41:47.027938   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 100/120
	I0520 11:41:48.029372   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 101/120
	I0520 11:41:49.030800   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 102/120
	I0520 11:41:50.032306   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 103/120
	I0520 11:41:51.033668   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 104/120
	I0520 11:41:52.035936   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 105/120
	I0520 11:41:53.037244   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 106/120
	I0520 11:41:54.038686   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 107/120
	I0520 11:41:55.040081   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 108/120
	I0520 11:41:56.041449   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 109/120
	I0520 11:41:57.043624   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 110/120
	I0520 11:41:58.045093   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 111/120
	I0520 11:41:59.046309   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 112/120
	I0520 11:42:00.047568   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 113/120
	I0520 11:42:01.048876   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 114/120
	I0520 11:42:02.050609   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 115/120
	I0520 11:42:03.051916   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 116/120
	I0520 11:42:04.053221   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 117/120
	I0520 11:42:05.054332   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 118/120
	I0520 11:42:06.055550   57175 main.go:141] libmachine: (embed-certs-274976) Waiting for machine to stop 119/120
	I0520 11:42:07.056200   57175 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 11:42:07.056244   57175 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 11:42:07.057885   57175 out.go:177] 
	W0520 11:42:07.059069   57175 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 11:42:07.059083   57175 out.go:239] * 
	* 
	W0520 11:42:07.061685   57175 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:42:07.062749   57175 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-274976 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976
E0520 11:42:19.191257   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976: exit status 3 (18.482537328s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:42:25.549233   58034 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host
	E0520 11:42:25.549252   58034 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-274976" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599: exit status 3 (3.163810836s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:40:47.085310   57391 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0520 11:40:47.085333   57391 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-814599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-814599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153064422s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-814599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599: exit status 3 (3.062476872s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:40:56.301240   57473 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host
	E0520 11:40:56.301273   57473 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.185:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-814599" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
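Here the addon enable never reaches the cluster: every SSH dial to 192.168.39.185:22 returns "no route to host", so the paused-container check behind MK_ADDON_ENABLE_PAUSED cannot run at all. A hedged connectivity check from the build host, assuming the guest should still hold 192.168.39.185:

	# Sketch only: confirm whether the guest is reachable before blaming the addon.
	ping -c 3 192.168.39.185        # basic reachability
	nc -vz 192.168.39.185 22        # is sshd accepting connections?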

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-210420 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-210420 create -f testdata/busybox.yaml: exit status 1 (41.960566ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-210420" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-210420 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 6 (222.090379ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:41:04.580723   57618 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 6 (228.807125ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:41:04.808734   57648 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
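The DeployApp failure is a kubeconfig problem rather than a workload problem: kubectl has no "old-k8s-version-210420" context, and minikube status itself warns that kubectl points at a stale VM endpoint. A short sketch of the repair the warning suggests, assuming the profile is otherwise healthy:

	# Sketch only: restore the kubectl context the test expects.
	kubectl config get-contexts                                    # confirm the context is missing
	out/minikube-linux-amd64 update-context -p old-k8s-version-210420
	kubectl --context old-k8s-version-210420 get nodes             # should work once the context exists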

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (83.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-210420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0520 11:41:17.359710   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-210420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m23.114772396s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-210420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-210420 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-210420 describe deploy/metrics-server -n kube-system: exit status 1 (42.137455ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-210420" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-210420 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 6 (222.300963ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:42:28.188071   58170 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210420" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (83.38s)
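This enable fails one layer deeper: the manifests are pushed to the node, but kubectl apply inside the guest is refused on localhost:8443, which points at the apiserver not running rather than at the metrics-server addon itself. A hedged check over SSH, assuming the profile's SSH access still works:

	# Sketch only: is the apiserver container up inside the guest?
	out/minikube-linux-amd64 -p old-k8s-version-210420 ssh -- sudo crictl ps --name kube-apiserver
	out/minikube-linux-amd64 -p old-k8s-version-210420 ssh -- sudo ss -ltn | grep 8443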

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-668445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-668445 --alsologtostderr -v=3: exit status 82 (2m0.476863267s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-668445"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:41:30.469044   57883 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:41:30.469306   57883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:41:30.469316   57883 out.go:304] Setting ErrFile to fd 2...
	I0520 11:41:30.469320   57883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:41:30.469477   57883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:41:30.469690   57883 out.go:298] Setting JSON to false
	I0520 11:41:30.469766   57883 mustload.go:65] Loading cluster: default-k8s-diff-port-668445
	I0520 11:41:30.470061   57883 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:41:30.470123   57883 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:41:30.470280   57883 mustload.go:65] Loading cluster: default-k8s-diff-port-668445
	I0520 11:41:30.470375   57883 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:41:30.470396   57883 stop.go:39] StopHost: default-k8s-diff-port-668445
	I0520 11:41:30.470740   57883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:41:30.470789   57883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:41:30.485166   57883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
	I0520 11:41:30.485596   57883 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:41:30.486099   57883 main.go:141] libmachine: Using API Version  1
	I0520 11:41:30.486124   57883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:41:30.486416   57883 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:41:30.488847   57883 out.go:177] * Stopping node "default-k8s-diff-port-668445"  ...
	I0520 11:41:30.490301   57883 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0520 11:41:30.490334   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:41:30.490535   57883 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0520 11:41:30.490555   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:41:30.493497   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:41:30.493891   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:41:30.493918   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:41:30.494060   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:41:30.494218   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:41:30.494356   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:41:30.494506   57883 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:41:30.589587   57883 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0520 11:41:30.646280   57883 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0520 11:41:30.712500   57883 main.go:141] libmachine: Stopping "default-k8s-diff-port-668445"...
	I0520 11:41:30.712537   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:41:30.713993   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Stop
	I0520 11:41:30.717253   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 0/120
	I0520 11:41:31.719157   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 1/120
	I0520 11:41:32.720519   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 2/120
	I0520 11:41:33.721868   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 3/120
	I0520 11:41:34.723208   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 4/120
	I0520 11:41:35.725319   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 5/120
	I0520 11:41:36.726640   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 6/120
	I0520 11:41:37.728009   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 7/120
	I0520 11:41:38.729286   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 8/120
	I0520 11:41:39.730537   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 9/120
	I0520 11:41:40.731773   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 10/120
	I0520 11:41:41.733026   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 11/120
	I0520 11:41:42.734655   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 12/120
	I0520 11:41:43.736209   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 13/120
	I0520 11:41:44.737629   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 14/120
	I0520 11:41:45.739629   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 15/120
	I0520 11:41:46.740957   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 16/120
	I0520 11:41:47.742343   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 17/120
	I0520 11:41:48.743656   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 18/120
	I0520 11:41:49.745027   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 19/120
	I0520 11:41:50.747256   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 20/120
	I0520 11:41:51.748623   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 21/120
	I0520 11:41:52.750031   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 22/120
	I0520 11:41:53.751314   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 23/120
	I0520 11:41:54.752654   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 24/120
	I0520 11:41:55.754685   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 25/120
	I0520 11:41:56.756003   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 26/120
	I0520 11:41:57.757457   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 27/120
	I0520 11:41:58.758848   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 28/120
	I0520 11:41:59.760340   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 29/120
	I0520 11:42:00.762560   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 30/120
	I0520 11:42:01.763942   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 31/120
	I0520 11:42:02.765242   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 32/120
	I0520 11:42:03.766504   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 33/120
	I0520 11:42:04.767859   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 34/120
	I0520 11:42:05.769771   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 35/120
	I0520 11:42:06.771096   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 36/120
	I0520 11:42:07.772456   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 37/120
	I0520 11:42:08.773742   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 38/120
	I0520 11:42:09.775287   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 39/120
	I0520 11:42:10.776525   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 40/120
	I0520 11:42:11.777921   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 41/120
	I0520 11:42:12.779609   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 42/120
	I0520 11:42:13.781059   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 43/120
	I0520 11:42:14.782477   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 44/120
	I0520 11:42:15.784352   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 45/120
	I0520 11:42:16.785697   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 46/120
	I0520 11:42:17.787176   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 47/120
	I0520 11:42:18.788608   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 48/120
	I0520 11:42:19.789880   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 49/120
	I0520 11:42:20.792080   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 50/120
	I0520 11:42:21.793620   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 51/120
	I0520 11:42:22.795314   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 52/120
	I0520 11:42:23.796876   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 53/120
	I0520 11:42:24.798234   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 54/120
	I0520 11:42:25.800116   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 55/120
	I0520 11:42:26.801668   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 56/120
	I0520 11:42:27.803039   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 57/120
	I0520 11:42:28.804291   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 58/120
	I0520 11:42:29.805535   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 59/120
	I0520 11:42:30.807572   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 60/120
	I0520 11:42:31.808593   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 61/120
	I0520 11:42:32.810006   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 62/120
	I0520 11:42:33.811466   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 63/120
	I0520 11:42:34.812845   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 64/120
	I0520 11:42:35.814769   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 65/120
	I0520 11:42:36.816327   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 66/120
	I0520 11:42:37.817727   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 67/120
	I0520 11:42:38.819089   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 68/120
	I0520 11:42:39.820584   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 69/120
	I0520 11:42:40.822725   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 70/120
	I0520 11:42:41.824240   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 71/120
	I0520 11:42:42.825592   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 72/120
	I0520 11:42:43.826959   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 73/120
	I0520 11:42:44.828325   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 74/120
	I0520 11:42:45.830497   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 75/120
	I0520 11:42:46.831819   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 76/120
	I0520 11:42:47.833282   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 77/120
	I0520 11:42:48.834561   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 78/120
	I0520 11:42:49.835950   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 79/120
	I0520 11:42:50.838046   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 80/120
	I0520 11:42:51.839430   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 81/120
	I0520 11:42:52.840766   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 82/120
	I0520 11:42:53.842192   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 83/120
	I0520 11:42:54.843607   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 84/120
	I0520 11:42:55.846129   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 85/120
	I0520 11:42:56.847439   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 86/120
	I0520 11:42:57.848756   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 87/120
	I0520 11:42:58.850055   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 88/120
	I0520 11:42:59.851400   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 89/120
	I0520 11:43:00.853569   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 90/120
	I0520 11:43:01.854863   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 91/120
	I0520 11:43:02.856171   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 92/120
	I0520 11:43:03.857587   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 93/120
	I0520 11:43:04.858958   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 94/120
	I0520 11:43:05.860829   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 95/120
	I0520 11:43:06.862165   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 96/120
	I0520 11:43:07.863432   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 97/120
	I0520 11:43:08.865030   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 98/120
	I0520 11:43:09.866455   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 99/120
	I0520 11:43:10.868750   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 100/120
	I0520 11:43:11.870116   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 101/120
	I0520 11:43:12.871521   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 102/120
	I0520 11:43:13.872892   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 103/120
	I0520 11:43:14.874365   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 104/120
	I0520 11:43:15.876546   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 105/120
	I0520 11:43:16.877961   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 106/120
	I0520 11:43:17.879425   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 107/120
	I0520 11:43:18.880762   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 108/120
	I0520 11:43:19.882237   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 109/120
	I0520 11:43:20.884456   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 110/120
	I0520 11:43:21.885777   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 111/120
	I0520 11:43:22.887033   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 112/120
	I0520 11:43:23.888366   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 113/120
	I0520 11:43:24.889632   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 114/120
	I0520 11:43:25.891477   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 115/120
	I0520 11:43:26.892780   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 116/120
	I0520 11:43:27.894167   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 117/120
	I0520 11:43:28.895622   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 118/120
	I0520 11:43:29.896922   57883 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for machine to stop 119/120
	I0520 11:43:30.898287   57883 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0520 11:43:30.898347   57883 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0520 11:43:30.900515   57883 out.go:177] 
	W0520 11:43:30.901841   57883 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0520 11:43:30.901858   57883 out.go:239] * 
	* 
	W0520 11:43:30.904282   57883 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:43:30.905870   57883 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-668445 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445: exit status 3 (18.609753783s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:43:49.517294   58636 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host
	E0520 11:43:49.517321   58636 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-668445" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)
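The GUEST_STOP_TIMEOUT above is the kvm2 driver giving up after 120 one-second polls without the VM ever leaving the "Running" state; the follow-up status probe then fails because SSH to 192.168.61.221:22 has no route to host. A minimal diagnostic sketch for a local post-mortem, assuming the libvirt domain carries the same name as the minikube profile and that virsh is available on the host (both are assumptions, not taken from this log):

	# confirm the domain is still running from libvirt's point of view
	virsh list --all
	# hard power-off the stuck domain, then retry the graceful stop
	virsh destroy default-k8s-diff-port-668445
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-668445 --alsologtostderr -v=3

The sketch is for debugging only; the test itself fails as soon as the first stop returns exit status 82.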

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976: exit status 3 (3.167506259s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:42:28.717265   58113 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host
	E0520 11:42:28.717284   58113 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152900677s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976: exit status 3 (3.063036341s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:42:37.933274   58351 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host
	E0520 11:42:37.933293   58351 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.167:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-274976" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
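This failure is downstream of the same unreachable guest: both the status probe and the addon enable call die on "dial tcp 192.168.50.167:22: connect: no route to host", so the addon step never even reaches crictl. A minimal sketch for separating "VM is down" from "SSH/networking is broken", assuming 192.168.50.167 is still the guest's DHCP lease and that netcat is installed on the host (both assumptions):

	# what minikube believes the host state is
	out/minikube-linux-amd64 status -p embed-certs-274976 --alsologtostderr
	# probe the guest's SSH port directly; a timeout or "no route to host" here points at the VM/network rather than at minikube
	nc -vz -w 5 192.168.50.167 22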

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (699.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-210420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-210420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m36.303607655s)

                                                
                                                
-- stdout --
	* [old-k8s-version-210420] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-210420" primary control-plane node in "old-k8s-version-210420" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:42:31.696446   58316 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:42:31.696670   58316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:42:31.696677   58316 out.go:304] Setting ErrFile to fd 2...
	I0520 11:42:31.696681   58316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:42:31.696887   58316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:42:31.697403   58316 out.go:298] Setting JSON to false
	I0520 11:42:31.698327   58316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5095,"bootTime":1716200257,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:42:31.698380   58316 start.go:139] virtualization: kvm guest
	I0520 11:42:31.700463   58316 out.go:177] * [old-k8s-version-210420] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:42:31.701667   58316 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:42:31.702880   58316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:42:31.701686   58316 notify.go:220] Checking for updates...
	I0520 11:42:31.705157   58316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:42:31.706285   58316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:42:31.707450   58316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:42:31.708613   58316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:42:31.709999   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:42:31.710372   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:42:31.710431   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:42:31.725226   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43027
	I0520 11:42:31.725605   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:42:31.726067   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:42:31.726088   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:42:31.726394   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:42:31.726548   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:42:31.728376   58316 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 11:42:31.729646   58316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:42:31.729942   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:42:31.729982   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:42:31.744257   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0520 11:42:31.744598   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:42:31.745054   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:42:31.745076   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:42:31.745353   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:42:31.745510   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:42:31.779553   58316 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:42:31.780948   58316 start.go:297] selected driver: kvm2
	I0520 11:42:31.780970   58316 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:42:31.781097   58316 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:42:31.782022   58316 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:42:31.782100   58316 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:42:31.796659   58316 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:42:31.797023   58316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:42:31.797050   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:42:31.797058   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:42:31.797087   58316 start.go:340] cluster config:
	{Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:42:31.797206   58316 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:42:31.799081   58316 out.go:177] * Starting "old-k8s-version-210420" primary control-plane node in "old-k8s-version-210420" cluster
	I0520 11:42:31.800122   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:42:31.800150   58316 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 11:42:31.800160   58316 cache.go:56] Caching tarball of preloaded images
	I0520 11:42:31.800256   58316 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:42:31.800271   58316 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 11:42:31.800359   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:42:31.800527   58316 start.go:360] acquireMachinesLock for old-k8s-version-210420: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
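	(The block of pgrep lines above is minikube polling, roughly every 500ms, for a kube-apiserver process that never appears; after about a minute it gives up and falls back to inspecting CRI containers below. A hedged sketch of that polling pattern - the helper name and the 60s timeout are illustrative, not minikube's exact values:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls `pgrep -xnf kube-apiserver.*minikube.*` until a
	// matching process shows up or the deadline passes, mirroring the retry
	// cadence in the log (one attempt roughly every 500ms).
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(60 * time.Second); err != nil {
			fmt.Println(err)
		}
	}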
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
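	(This completes one diagnostic cycle: with no control-plane containers found, minikube gathers the kubelet journal, dmesg, "describe nodes", the CRI-O journal and container status, then retries; the same cycle repeats below. A sketch of the per-component crictl queries it issues first - the function name is hypothetical, the crictl flags are the ones shown in the log:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listControlPlaneContainers mirrors the crictl queries in the log: for
	// each expected component it asks the CRI runtime for matching containers
	// in any state and reports whether at least one was found.
	func listControlPlaneContainers() map[string]bool {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		found := make(map[string]bool)
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			found[name] = err == nil && strings.TrimSpace(string(out)) != ""
		}
		return found
	}

	func main() {
		for name, ok := range listControlPlaneContainers() {
			fmt.Printf("%-24s found=%v\n", name, ok)
		}
	}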
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
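	(The recurring "connection to the server localhost:8443 was refused" stderr above follows directly from the earlier findings: the kubeconfig points kubectl at localhost:8443, and since no kube-apiserver container ever started, nothing is listening there. A trivial reachability check that would show the same symptom - a sketch, not part of minikube:)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the apiserver endpoint the kubeconfig uses; "connection refused"
	// here corresponds to the repeated describe-nodes failures in the log.
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}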
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
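The cycle repeating above is minikube waiting for the kube-apiserver to come up: every few seconds it probes for the apiserver process, lists CRI containers for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs; `kubectl describe nodes` keeps failing because nothing is listening on localhost:8443. The same checks can be reproduced by hand on the node; this is only a minimal sketch, assuming shell access to the VM, and it uses only commands already quoted verbatim in the log above:

	# poll for the apiserver process, as the wait loop in the log does every few seconds
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 3; done
	# list control-plane containers (empty output while the apiserver is down)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# recent kubelet and CRI-O logs
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node description; fails with "connection refused" until the apiserver is up
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Until one of the crictl listings returns a container ID (or the pgrep succeeds), the cycle in the log simply repeats, as seen below.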
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
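Each describe-nodes attempt in this loop fails identically: kubectl targets localhost:8443, and since crictl reports no kube-apiserver container, nothing is listening on that port, so the connection is refused. The checks below are a hedged way to confirm that on the node; `ss` and `curl` are assumed to be available there and do not appear in this log.

    # Illustrative checks (assumes ss and curl exist on the node)
    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
    sudo crictl ps -a --name=kube-apiserver   # the CRI view the loop above relies on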
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
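For reference, the preceding lines show minikube's stale-config check before re-running kubeadm: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when that check fails. A minimal shell sketch of that loop, assuming the same file set and endpoint seen in the log above (not minikube's actual code path):

    # Hypothetical condensation of the grep-then-remove sequence logged above
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # file missing or endpoint absent: drop it before kubeadm init
    done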
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
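The failure text above points at the kubelet and the CRI-O containers as the things to inspect. Collected into a short shell sketch using only the commands the output itself suggests (the crio.sock path is taken from the log; CONTAINERID is a placeholder for an ID found by the ps step):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # inspect the failing container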
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	* 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	* 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-210420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (224.40773ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-210420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-210420 logs -n 25: (1.581433302s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                           | kubernetes-upgrade-854547    | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:44:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:44:01.940145   58840 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:44:01.940398   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940409   58840 out.go:304] Setting ErrFile to fd 2...
	I0520 11:44:01.940415   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940633   58840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false
	I0520 11:44:01.942056   58840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5185,"bootTime":1716200257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:44:01.942118   58840 start.go:139] virtualization: kvm guest
	I0520 11:44:01.944331   58840 out.go:177] * [default-k8s-diff-port-668445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:44:01.945948   58840 notify.go:220] Checking for updates...
	I0520 11:44:01.945956   58840 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:44:01.947257   58840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:44:01.948525   58840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:44:01.949881   58840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:44:01.951156   58840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:44:01.952736   58840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:44:01.954565   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:44:01.954932   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.954979   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.969623   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0520 11:44:01.969984   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.970444   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.970489   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.970816   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.971013   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:01.971212   58840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:44:01.971493   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.971540   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.985556   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0520 11:44:01.985885   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.986365   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.986388   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.986707   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.986883   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:02.021778   58840 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:44:02.022944   58840 start.go:297] selected driver: kvm2
	I0520 11:44:02.022954   58840 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.023042   58840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:44:02.023717   58840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.023796   58840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:44:02.038516   58840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:44:02.038900   58840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:44:02.038931   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:44:02.038942   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:44:02.038989   58840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.039097   58840 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.041645   58840 out.go:177] * Starting "default-k8s-diff-port-668445" primary control-plane node in "default-k8s-diff-port-668445" cluster
	I0520 11:44:02.042785   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:44:02.042826   58840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:44:02.042839   58840 cache.go:56] Caching tarball of preloaded images
	I0520 11:44:02.042917   58840 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:44:02.042929   58840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:44:02.043041   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:44:02.043267   58840 start.go:360] acquireMachinesLock for default-k8s-diff-port-668445: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:44:06.605177   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:09.677144   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:15.757188   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:18.829221   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:24.909165   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:27.981180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:34.061180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:37.133203   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:43.213225   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:46.285226   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:52.365216   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:55.437121   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:01.517184   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:04.589206   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:10.669220   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:13.741264   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:19.821209   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:22.893127   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:28.973155   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:32.045217   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:38.125195   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.127376   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:41.127411   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127706   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:45:41.127739   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127954   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:45:41.129638   57519 machine.go:97] duration metric: took 4m44.663213682s to provisionDockerMachine
	I0520 11:45:41.129681   57519 fix.go:56] duration metric: took 4m44.685919857s for fixHost
	I0520 11:45:41.129692   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 4m44.68594997s
	W0520 11:45:41.129717   57519 start.go:713] error starting host: provision: host is not running
	W0520 11:45:41.129807   57519 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0520 11:45:41.129817   57519 start.go:728] Will try again in 5 seconds ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:46.134344   57519 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:46:00.745820   58398 start.go:364] duration metric: took 3m22.67445444s to acquireMachinesLock for "embed-certs-274976"
	I0520 11:46:00.745877   58398 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:00.745887   58398 fix.go:54] fixHost starting: 
	I0520 11:46:00.746287   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:00.746320   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:00.762678   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0520 11:46:00.763132   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:00.763654   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:00.763681   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:00.763991   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:00.764154   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:00.764308   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:00.765826   58398 fix.go:112] recreateIfNeeded on embed-certs-274976: state=Stopped err=<nil>
	I0520 11:46:00.765861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	W0520 11:46:00.766017   58398 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:00.768362   58398 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274976" ...
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:00.769810   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Start
	I0520 11:46:00.769988   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring networks are active...
	I0520 11:46:00.770615   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network default is active
	I0520 11:46:00.770928   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network mk-embed-certs-274976 is active
	I0520 11:46:00.771273   58398 main.go:141] libmachine: (embed-certs-274976) Getting domain xml...
	I0520 11:46:00.771977   58398 main.go:141] libmachine: (embed-certs-274976) Creating domain...
	I0520 11:46:02.023816   58398 main.go:141] libmachine: (embed-certs-274976) Waiting to get IP...
	I0520 11:46:02.024616   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.025031   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.025111   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.025009   59335 retry.go:31] will retry after 218.274206ms: waiting for machine to come up
	I0520 11:46:02.244661   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.245135   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.245163   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.245110   59335 retry.go:31] will retry after 359.160276ms: waiting for machine to come up
	I0520 11:46:02.606589   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.607106   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.607129   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.607044   59335 retry.go:31] will retry after 462.479953ms: waiting for machine to come up
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:03.071686   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.072242   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.072271   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.072198   59335 retry.go:31] will retry after 531.663033ms: waiting for machine to come up
	I0520 11:46:03.606121   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.606629   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.606653   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.606581   59335 retry.go:31] will retry after 690.866851ms: waiting for machine to come up
	I0520 11:46:04.299326   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:04.299799   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:04.299829   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:04.299750   59335 retry.go:31] will retry after 928.456543ms: waiting for machine to come up
	I0520 11:46:05.230115   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.230568   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.230586   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.230539   59335 retry.go:31] will retry after 761.597702ms: waiting for machine to come up
	I0520 11:46:05.993511   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.993897   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.993930   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.993854   59335 retry.go:31] will retry after 1.405329972s: waiting for machine to come up
	I0520 11:46:07.401706   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:07.402246   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:07.402312   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:07.402223   59335 retry.go:31] will retry after 1.717177668s: waiting for machine to come up
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:09.121909   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:09.122323   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:09.122358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:09.122261   59335 retry.go:31] will retry after 1.49678652s: waiting for machine to come up
	I0520 11:46:10.620875   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:10.621226   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:10.621255   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:10.621184   59335 retry.go:31] will retry after 2.772139112s: waiting for machine to come up
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.396309   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:13.396705   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:13.396725   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:13.396678   59335 retry.go:31] will retry after 2.987909039s: waiting for machine to come up
	I0520 11:46:16.386007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:16.386500   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:16.386527   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:16.386460   59335 retry.go:31] will retry after 4.351760111s: waiting for machine to come up
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.138009   58840 start.go:364] duration metric: took 2m20.094710429s to acquireMachinesLock for "default-k8s-diff-port-668445"
	I0520 11:46:22.138085   58840 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:22.138096   58840 fix.go:54] fixHost starting: 
	I0520 11:46:22.138535   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:22.138577   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:22.154345   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0520 11:46:22.154701   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:22.155172   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:22.155198   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:22.155492   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:22.155667   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:22.155834   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:22.157486   58840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-668445: state=Stopped err=<nil>
	I0520 11:46:22.157525   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	W0520 11:46:22.157703   58840 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:22.159724   58840 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-668445" ...
	I0520 11:46:20.742954   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743480   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has current primary IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743503   58398 main.go:141] libmachine: (embed-certs-274976) Found IP for machine: 192.168.50.167
	I0520 11:46:20.743517   58398 main.go:141] libmachine: (embed-certs-274976) Reserving static IP address...
	I0520 11:46:20.743907   58398 main.go:141] libmachine: (embed-certs-274976) Reserved static IP address: 192.168.50.167
	I0520 11:46:20.743930   58398 main.go:141] libmachine: (embed-certs-274976) Waiting for SSH to be available...
	I0520 11:46:20.743955   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.744000   58398 main.go:141] libmachine: (embed-certs-274976) DBG | skip adding static IP to network mk-embed-certs-274976 - found existing host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"}
	I0520 11:46:20.744017   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Getting to WaitForSSH function...
	I0520 11:46:20.746685   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747071   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.747112   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747244   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH client type: external
	I0520 11:46:20.747259   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa (-rw-------)
	I0520 11:46:20.747278   58398 main.go:141] libmachine: (embed-certs-274976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:20.747287   58398 main.go:141] libmachine: (embed-certs-274976) DBG | About to run SSH command:
	I0520 11:46:20.747295   58398 main.go:141] libmachine: (embed-certs-274976) DBG | exit 0
	I0520 11:46:20.872997   58398 main.go:141] libmachine: (embed-certs-274976) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:20.873377   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetConfigRaw
	I0520 11:46:20.874039   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:20.876476   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.876827   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.876859   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.877169   58398 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:46:20.877407   58398 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:20.877432   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:20.877643   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.879744   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880089   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.880113   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880328   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.880529   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880673   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880840   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.881034   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.881263   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.881275   58398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:20.993535   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:20.993572   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.993813   58398 buildroot.go:166] provisioning hostname "embed-certs-274976"
	I0520 11:46:20.993841   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.994034   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.996567   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.996922   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.996970   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.997092   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.997280   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997437   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997580   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.997734   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.997993   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.998013   58398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274976 && echo "embed-certs-274976" | sudo tee /etc/hostname
	I0520 11:46:21.125482   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274976
	
	I0520 11:46:21.125512   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.128347   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128717   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.128746   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128916   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.129120   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129296   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129436   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.129613   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.129774   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.129789   58398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274976/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:21.250287   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:21.250320   58398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:21.250371   58398 buildroot.go:174] setting up certificates
	I0520 11:46:21.250385   58398 provision.go:84] configureAuth start
	I0520 11:46:21.250403   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:21.250726   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:21.253610   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254038   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.254053   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254303   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.256633   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.257046   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257206   58398 provision.go:143] copyHostCerts
	I0520 11:46:21.257266   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:21.257289   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:21.257369   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:21.257489   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:21.257499   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:21.257537   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:21.257617   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:21.257626   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:21.257661   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:21.257729   58398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274976 san=[127.0.0.1 192.168.50.167 embed-certs-274976 localhost minikube]
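	The "generating server cert" step above issues a machine server certificate signed by the existing minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.50.167, embed-certs-274976, localhost, minikube). The following is a minimal, self-contained Go sketch of that kind of issuance, illustrative only and not minikube's provision code; the throwaway in-memory CA stands in for ca.pem/ca-key.pem, and errors are elided for brevity.

	// Illustrative sketch (assumption: not minikube's actual code path).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; in the real flow this is loaded from ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs reported in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-274976"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-274976", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.167")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}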
	I0520 11:46:21.458096   58398 provision.go:177] copyRemoteCerts
	I0520 11:46:21.458152   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:21.458178   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.460662   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461006   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.461042   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461205   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.461370   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.461545   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.461705   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.547703   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:21.574345   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:46:21.598843   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:21.624064   58398 provision.go:87] duration metric: took 373.664517ms to configureAuth
	I0520 11:46:21.624091   58398 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:21.624262   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:21.624330   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.626996   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627378   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.627406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627556   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.627711   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.627902   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.628077   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.628231   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.628402   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.628420   58398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:21.894896   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:21.894923   58398 machine.go:97] duration metric: took 1.017499149s to provisionDockerMachine
	I0520 11:46:21.894936   58398 start.go:293] postStartSetup for "embed-certs-274976" (driver="kvm2")
	I0520 11:46:21.894948   58398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:21.894966   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:21.895279   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:21.895304   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.897866   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898160   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.898193   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898292   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.898484   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.898663   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.898779   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.984571   58398 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:21.988891   58398 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:21.988918   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:21.988995   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:21.989129   58398 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:21.989257   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:21.999219   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:22.023032   58398 start.go:296] duration metric: took 128.082095ms for postStartSetup
	I0520 11:46:22.023076   58398 fix.go:56] duration metric: took 21.277188857s for fixHost
	I0520 11:46:22.023094   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.025971   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026362   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.026393   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026536   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.026725   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.026867   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.027017   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.027246   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:22.027424   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:22.027437   58398 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:22.137813   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205582.113779638
	
	I0520 11:46:22.137840   58398 fix.go:216] guest clock: 1716205582.113779638
	I0520 11:46:22.137850   58398 fix.go:229] Guest: 2024-05-20 11:46:22.113779638 +0000 UTC Remote: 2024-05-20 11:46:22.023079043 +0000 UTC m=+224.083510648 (delta=90.700595ms)
	I0520 11:46:22.137898   58398 fix.go:200] guest clock delta is within tolerance: 90.700595ms
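	The three fix.go lines above record the guest-vs-host clock comparison: the VM reports fractional Unix seconds (the garbled command string corresponds to `date +%s.%N`), and the resulting delta (90.7ms here) is checked against a tolerance. A small self-contained Go sketch of that style of check, illustrative rather than the actual fix.go implementation, with an assumed 2-second threshold:

	// Illustrative sketch: parse "seconds.nanoseconds" guest clock output and
	// decide whether the guest/host clock delta is within tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to 9 digits
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1716205582.113779638") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		tolerance := 2 * time.Second // hypothetical threshold for this sketch
		fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
	}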
	I0520 11:46:22.137906   58398 start.go:83] releasing machines lock for "embed-certs-274976", held for 21.39205527s
	I0520 11:46:22.137943   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.138222   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:22.140951   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.141388   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141558   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142024   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142187   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142245   58398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:22.142291   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.142395   58398 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:22.142417   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.144896   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145116   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145386   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145419   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145490   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145518   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145525   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145647   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145703   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145827   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145849   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146019   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146022   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:22.146175   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	W0520 11:46:22.226345   58398 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:22.226421   58398 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:22.248678   58398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:22.404298   58398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:22.410174   58398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:22.410249   58398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:22.426286   58398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:22.426311   58398 start.go:494] detecting cgroup driver to use...
	I0520 11:46:22.426369   58398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:22.441746   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:22.455445   58398 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:22.455515   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:22.469441   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:22.485963   58398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:22.604443   58398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:22.758542   58398 docker.go:233] disabling docker service ...
	I0520 11:46:22.758628   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:22.773371   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:22.786871   58398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:22.940022   58398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:23.076857   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:23.091980   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:23.111380   58398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:23.111434   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.122532   58398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:23.122597   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.134355   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.145484   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.156776   58398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:23.169017   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.180681   58398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.205664   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
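	For orientation, the sed edits above amount to a CRI-O drop-in roughly like the following. This fragment is reconstructed from the commands in the log, not captured from the VM, and the section headers are assumptions based on the stock CRI-O config layout.

	# /etc/crio/crio.conf.d/02-crio.conf (illustrative reconstruction)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]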
	I0520 11:46:23.217267   58398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:23.228490   58398 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:23.228555   58398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:23.243578   58398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:23.255394   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:23.387407   58398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:23.537543   58398 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:23.537613   58398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:23.542886   58398 start.go:562] Will wait 60s for crictl version
	I0520 11:46:23.542934   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:46:23.546740   58398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:23.587662   58398 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:23.587746   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.615735   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.648727   58398 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.161646   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Start
	I0520 11:46:22.161813   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring networks are active...
	I0520 11:46:22.162609   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network default is active
	I0520 11:46:22.163020   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network mk-default-k8s-diff-port-668445 is active
	I0520 11:46:22.163509   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Getting domain xml...
	I0520 11:46:22.164146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Creating domain...
	I0520 11:46:23.456104   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting to get IP...
	I0520 11:46:23.457157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457703   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457735   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.457652   59511 retry.go:31] will retry after 262.047261ms: waiting for machine to come up
	I0520 11:46:23.721329   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721756   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721815   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.721742   59511 retry.go:31] will retry after 301.868596ms: waiting for machine to come up
	I0520 11:46:24.025477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026042   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026072   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.026006   59511 retry.go:31] will retry after 431.470276ms: waiting for machine to come up
	I0520 11:46:24.458962   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459564   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.459450   59511 retry.go:31] will retry after 566.093902ms: waiting for machine to come up
	I0520 11:46:25.027215   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027787   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.027729   59511 retry.go:31] will retry after 471.2178ms: waiting for machine to come up
	I0520 11:46:25.500575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501067   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501106   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.501010   59511 retry.go:31] will retry after 682.542082ms: waiting for machine to come up
	I0520 11:46:26.184886   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185367   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185400   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:26.185315   59511 retry.go:31] will retry after 973.33325ms: waiting for machine to come up
	I0520 11:46:23.650320   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:23.653452   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.653919   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:23.653962   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.654259   58398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:23.658592   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:23.671310   58398 kubeadm.go:877] updating cluster {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:23.671460   58398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:23.671523   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:23.715873   58398 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:23.715947   58398 ssh_runner.go:195] Run: which lz4
	I0520 11:46:23.720362   58398 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:23.724821   58398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:23.724852   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:25.254824   58398 crio.go:462] duration metric: took 1.534501958s to copy over tarball
	I0520 11:46:25.254937   58398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:27.527739   58398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272764169s)
	I0520 11:46:27.527773   58398 crio.go:469] duration metric: took 2.272914971s to extract the tarball
	I0520 11:46:27.527780   58398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:27.564738   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:27.613196   58398 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:27.613220   58398 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:27.613229   58398 kubeadm.go:928] updating node { 192.168.50.167 8443 v1.30.1 crio true true} ...
	I0520 11:46:27.613321   58398 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:27.613381   58398 ssh_runner.go:195] Run: crio config
	I0520 11:46:27.669907   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:27.669931   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:27.669948   58398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:27.669969   58398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274976 NodeName:embed-certs-274976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:27.670117   58398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274976"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:27.670180   58398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:27.680414   58398 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:27.680473   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:27.690431   58398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0520 11:46:27.707625   58398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:27.724120   58398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0520 11:46:27.744218   58398 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:27.748173   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:27.760752   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:27.891061   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:27.909326   58398 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976 for IP: 192.168.50.167
	I0520 11:46:27.909354   58398 certs.go:194] generating shared ca certs ...
	I0520 11:46:27.909387   58398 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:27.909574   58398 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:27.909637   58398 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:27.909650   58398 certs.go:256] generating profile certs ...
	I0520 11:46:27.909754   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/client.key
	I0520 11:46:27.909843   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key.d6be4a1f
	I0520 11:46:27.909922   58398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key
	I0520 11:46:27.910072   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:27.910127   58398 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:27.910146   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:27.910183   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:27.910208   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:27.910229   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:27.910278   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:27.911030   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:27.949250   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.160027   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160492   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160515   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:27.160435   59511 retry.go:31] will retry after 1.263563226s: waiting for machine to come up
	I0520 11:46:28.426016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426503   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426555   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:28.426463   59511 retry.go:31] will retry after 1.407211588s: waiting for machine to come up
	I0520 11:46:29.835680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836101   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836121   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:29.836058   59511 retry.go:31] will retry after 2.320334078s: waiting for machine to come up
	I0520 11:46:27.987645   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:28.036652   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:28.083998   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:46:28.116104   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:28.154695   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:28.181013   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:28.213382   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:28.243550   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:28.268537   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:28.292856   58398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:28.310801   58398 ssh_runner.go:195] Run: openssl version
	I0520 11:46:28.316724   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:28.327461   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332286   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332345   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.339005   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:28.351192   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:28.362639   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367566   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367623   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.373275   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:28.384549   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:28.395600   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400136   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400188   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.405914   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
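The ls/openssl/ln sequence above registers each extra certificate with the system trust store: the file is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked back to it so OpenSSL-based tools treat it as a trusted CA. A small Go sketch of those steps follows, shelling out to openssl for the hash; the helper is hypothetical and not minikube's certs.go.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCert symlinks /etc/ssl/certs/<subject-hash>.0 to certPath so that
    // OpenSSL-based tools pick the certificate up as a trusted CA.
    func trustCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mimic `ln -fs`: replace any stale symlink
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }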
	I0520 11:46:28.417533   58398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:28.422327   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:28.428746   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:28.434840   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:28.441227   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:28.447310   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:28.453450   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
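Each openssl x509 -checkend 86400 call above asks whether the certificate remains valid for at least the next 24 hours (86400 seconds). A rough Go equivalent using the standard crypto/x509 package is shown below; the helper and the example path are illustrative only.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file at path
    // will still be valid after the given duration (what openssl's -checkend does).
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("valid for next 24h:", ok)
    }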
	I0520 11:46:28.462637   58398 kubeadm.go:391] StartCluster: {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:28.462746   58398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:28.462828   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.501759   58398 cri.go:89] found id: ""
	I0520 11:46:28.501826   58398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:28.512536   58398 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:28.512554   58398 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:28.512559   58398 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:28.512603   58398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:28.522288   58398 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:28.523266   58398 kubeconfig.go:125] found "embed-certs-274976" server: "https://192.168.50.167:8443"
	I0520 11:46:28.525188   58398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:28.535056   58398 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.167
	I0520 11:46:28.535092   58398 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:28.535104   58398 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:28.535157   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.573317   58398 cri.go:89] found id: ""
	I0520 11:46:28.573429   58398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:28.593111   58398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:28.603312   58398 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:28.603345   58398 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:28.603398   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:28.612794   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:28.612870   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:28.622819   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:28.634550   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:28.634614   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:28.644568   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.654918   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:28.654980   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.665581   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:28.677090   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:28.677167   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:28.687152   58398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:28.697505   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:28.833183   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:29.963435   58398 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130211655s)
	I0520 11:46:29.963486   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.185882   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.248946   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.334924   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:30.335020   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.835637   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.336028   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.354314   58398 api_server.go:72] duration metric: took 1.01937754s to wait for apiserver process to appear ...
	I0520 11:46:31.354343   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:31.354365   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:31.354916   58398 api_server.go:269] stopped: https://192.168.50.167:8443/healthz: Get "https://192.168.50.167:8443/healthz": dial tcp 192.168.50.167:8443: connect: connection refused
	I0520 11:46:31.854629   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.299609   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.299653   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.299669   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.370279   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.370304   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.370317   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.378426   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.378448   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.855034   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.860164   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:34.860205   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.354583   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.362461   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:35.362489   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.854836   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.859322   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:46:35.867050   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:35.867077   58398 api_server.go:131] duration metric: took 4.512726716s to wait for apiserver health ...
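The healthz probes above progress through the usual phases of an apiserver restart: connection refused while the process starts, 403 because the probe is anonymous and RBAC bootstrap has not finished, 500 while post-start hooks complete, and finally 200 "ok". A minimal Go sketch of such a polling loop is below; the URL, interval and timeout mirror the log, TLS verification is skipped because the probe is unauthenticated, and this is an illustration rather than minikube's api_server.go.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url every interval until it returns HTTP 200 or the
    // timeout elapses, printing non-200 bodies the way the log above does.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.167:8443/healthz", 500*time.Millisecond, time.Minute); err != nil {
            fmt.Println(err)
        }
    }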
	I0520 11:46:35.867088   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:35.867096   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:35.869427   58398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.158195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158712   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:32.158637   59511 retry.go:31] will retry after 2.733920459s: waiting for machine to come up
	I0520 11:46:34.893892   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894436   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:34.894374   59511 retry.go:31] will retry after 3.119057685s: waiting for machine to come up
	I0520 11:46:35.871058   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:35.898211   58398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
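The two lines above create /etc/cni/net.d and write a 496-byte bridge conflist for the recommended bridge CNI. The actual file contents are not shown in the log; the sketch below writes a typical bridge plus host-local IPAM conflist from Go, with the subnet and plugin options chosen as illustrative assumptions rather than the exact configuration minikube generates.

    package main

    import "os"

    // A typical bridge CNI conflist with host-local IPAM. The version, subnet and
    // plugin options are illustrative, not the exact contents minikube writes.
    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
            panic(err)
        }
    }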
	I0520 11:46:35.942295   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:35.954120   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:35.954156   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:35.954171   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:35.954181   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:35.954194   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:35.954204   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 11:46:35.954213   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:35.954220   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:35.954229   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:46:35.954239   58398 system_pods.go:74] duration metric: took 11.924168ms to wait for pod list to return data ...
	I0520 11:46:35.954249   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:35.957627   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:35.957664   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:35.957679   58398 node_conditions.go:105] duration metric: took 3.421729ms to run NodePressure ...
	I0520 11:46:35.957700   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:36.235314   58398 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241075   58398 kubeadm.go:733] kubelet initialised
	I0520 11:46:36.241101   58398 kubeadm.go:734] duration metric: took 5.759765ms waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241110   58398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:36.248030   58398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.254230   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254263   58398 pod_ready.go:81] duration metric: took 6.20401ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.254274   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254281   58398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.259348   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259379   58398 pod_ready.go:81] duration metric: took 5.088735ms for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.259392   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259402   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.266963   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.266997   58398 pod_ready.go:81] duration metric: took 7.580054ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.267010   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.267020   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.346321   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346353   58398 pod_ready.go:81] duration metric: took 79.314801ms for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.346363   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346371   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.746645   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746674   58398 pod_ready.go:81] duration metric: took 400.291751ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.746686   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746692   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.146413   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146443   58398 pod_ready.go:81] duration metric: took 399.744378ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.146454   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146465   58398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.545943   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545966   58398 pod_ready.go:81] duration metric: took 399.491619ms for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.545974   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545981   58398 pod_ready.go:38] duration metric: took 1.304861628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:37.545997   58398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:37.558076   58398 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:37.558097   58398 kubeadm.go:591] duration metric: took 9.045531679s to restartPrimaryControlPlane
	I0520 11:46:37.558106   58398 kubeadm.go:393] duration metric: took 9.095477048s to StartCluster
	I0520 11:46:37.558125   58398 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.558199   58398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:37.559701   58398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.559920   58398 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:37.561751   58398 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:37.560038   58398 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:37.560143   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:37.563096   58398 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274976"
	I0520 11:46:37.563106   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:37.563124   58398 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274976"
	W0520 11:46:37.563133   58398 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:37.563156   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563164   58398 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274976"
	I0520 11:46:37.563179   58398 addons.go:69] Setting metrics-server=true in profile "embed-certs-274976"
	I0520 11:46:37.563193   58398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274976"
	I0520 11:46:37.563204   58398 addons.go:234] Setting addon metrics-server=true in "embed-certs-274976"
	W0520 11:46:37.563215   58398 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:37.563241   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563516   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563558   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563575   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563600   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563614   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563638   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.577924   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0520 11:46:37.578096   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0520 11:46:37.578120   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0520 11:46:37.578322   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578480   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578527   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578851   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.578879   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.578982   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579003   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579019   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579038   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579202   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579342   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579364   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579385   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.579824   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579867   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.579871   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579895   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.582214   58398 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274976"
	W0520 11:46:37.582230   58398 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:37.582249   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.582511   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.582542   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.594308   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0520 11:46:37.594672   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.595047   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.595067   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.595525   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.595551   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0520 11:46:37.595709   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.596037   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.596545   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.596568   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.596980   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.597579   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.597599   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.597802   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.599963   58398 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:37.598457   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0520 11:46:37.601485   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:37.601505   58398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:37.601527   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.601793   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.602219   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.602238   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.602502   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.602688   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.604122   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.604188   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.605753   58398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:37.604566   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.604874   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.607190   58398 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.607205   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:37.607221   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.607240   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.607263   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.607414   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.607729   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.610088   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610462   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.610483   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610738   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.610912   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.611072   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.611212   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.613655   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0520 11:46:37.613938   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.614386   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.614403   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.614653   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.614826   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.616151   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.616331   58398 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.616345   58398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:37.616362   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.619627   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620050   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.620072   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620214   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.620365   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.620480   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.620660   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.734680   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:37.751098   58398 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:37.842846   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:37.842867   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:37.853894   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.862145   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:37.862167   58398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:37.883656   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.884046   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:37.884061   58398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:37.941086   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:38.187392   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187415   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187819   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.187841   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.187855   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187822   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.188095   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.188115   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.193695   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.193710   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.193926   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.193942   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813454   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813472   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.813794   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.813813   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813821   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813843   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.813894   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.814140   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.814154   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897046   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897078   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897453   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897469   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897494   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897505   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897755   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897759   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897772   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897784   58398 addons.go:470] Verifying addon metrics-server=true in "embed-certs-274976"
	I0520 11:46:38.899910   58398 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.014993   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015405   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015458   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:38.015373   59511 retry.go:31] will retry after 4.505016985s: waiting for machine to come up
	I0520 11:46:38.901272   58398 addons.go:505] duration metric: took 1.341257876s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0520 11:46:39.754549   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:41.758717   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:43.865868   57519 start.go:364] duration metric: took 57.7314587s to acquireMachinesLock for "no-preload-814599"
	I0520 11:46:43.865920   57519 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:43.865931   57519 fix.go:54] fixHost starting: 
	I0520 11:46:43.866311   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:43.866347   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:43.885634   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0520 11:46:43.886068   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:43.886620   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:46:43.886659   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:43.887013   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:43.887200   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:46:43.887337   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:46:43.888902   57519 fix.go:112] recreateIfNeeded on no-preload-814599: state=Stopped err=<nil>
	I0520 11:46:43.888958   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	W0520 11:46:43.889108   57519 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:43.891052   57519 out.go:177] * Restarting existing kvm2 VM for "no-preload-814599" ...
	I0520 11:46:42.521630   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522114   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Found IP for machine: 192.168.61.221
	I0520 11:46:42.522140   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserving static IP address...
	I0520 11:46:42.522157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has current primary IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522534   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserved static IP address: 192.168.61.221
	I0520 11:46:42.522556   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for SSH to be available...
	I0520 11:46:42.522571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.522593   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | skip adding static IP to network mk-default-k8s-diff-port-668445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"}
	I0520 11:46:42.522613   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Getting to WaitForSSH function...
	I0520 11:46:42.524812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525199   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.525221   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525362   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH client type: external
	I0520 11:46:42.525391   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa (-rw-------)
	I0520 11:46:42.525427   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:42.525443   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | About to run SSH command:
	I0520 11:46:42.525455   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | exit 0
	I0520 11:46:42.648689   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:42.649122   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetConfigRaw
	I0520 11:46:42.649745   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:42.652220   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.652596   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652821   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:46:42.653026   58840 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:42.653046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:42.653235   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.655453   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655758   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.655777   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655923   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.656089   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656384   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.656551   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.656729   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.656739   58840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:42.761195   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:42.761225   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761450   58840 buildroot.go:166] provisioning hostname "default-k8s-diff-port-668445"
	I0520 11:46:42.761477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761666   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.764340   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764670   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.764695   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764801   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.764973   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765131   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765285   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.765472   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.765634   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.765647   58840 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-668445 && echo "default-k8s-diff-port-668445" | sudo tee /etc/hostname
	I0520 11:46:42.883179   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-668445
	
	I0520 11:46:42.883204   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.885913   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886302   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.886347   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.886699   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.886884   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.887040   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.887238   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.887419   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.887444   58840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-668445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-668445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-668445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:43.001769   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:43.001792   58840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:43.001825   58840 buildroot.go:174] setting up certificates
	I0520 11:46:43.001834   58840 provision.go:84] configureAuth start
	I0520 11:46:43.001842   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:43.002064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.004606   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.004897   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.004944   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.005087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.006929   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.007296   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007386   58840 provision.go:143] copyHostCerts
	I0520 11:46:43.007461   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:43.007470   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:43.007523   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:43.007608   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:43.007617   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:43.007637   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:43.007686   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:43.007693   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:43.007709   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:43.007753   58840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-668445 san=[127.0.0.1 192.168.61.221 default-k8s-diff-port-668445 localhost minikube]
	I0520 11:46:43.165780   58840 provision.go:177] copyRemoteCerts
	I0520 11:46:43.165835   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:43.165860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.168782   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169136   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.169181   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169343   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.169575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.169717   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.169892   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.252175   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:43.280313   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0520 11:46:43.306901   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:43.331191   58840 provision.go:87] duration metric: took 329.344445ms to configureAuth
	I0520 11:46:43.331227   58840 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:43.331478   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:43.331576   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.334173   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334573   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.334603   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334770   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.334961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335108   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335236   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.335381   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.335593   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.335615   58840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:43.630180   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:43.630213   58840 machine.go:97] duration metric: took 977.172725ms to provisionDockerMachine
	I0520 11:46:43.630229   58840 start.go:293] postStartSetup for "default-k8s-diff-port-668445" (driver="kvm2")
	I0520 11:46:43.630244   58840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:43.630270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.630597   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:43.630635   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.633307   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633650   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.633680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633798   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.633976   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.634141   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.634264   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.719350   58840 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:43.723320   58840 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:43.723339   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:43.723396   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:43.723478   58840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:43.723564   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:43.732422   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:43.757138   58840 start.go:296] duration metric: took 126.894206ms for postStartSetup
	I0520 11:46:43.757179   58840 fix.go:56] duration metric: took 21.619082215s for fixHost
	I0520 11:46:43.757202   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.759486   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.759860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.759887   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.760065   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.760248   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760425   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760527   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.760654   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.760824   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.760839   58840 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:43.865691   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205603.841593335
	
	I0520 11:46:43.865713   58840 fix.go:216] guest clock: 1716205603.841593335
	I0520 11:46:43.865722   58840 fix.go:229] Guest: 2024-05-20 11:46:43.841593335 +0000 UTC Remote: 2024-05-20 11:46:43.757183475 +0000 UTC m=+161.848941663 (delta=84.40986ms)
	I0520 11:46:43.865777   58840 fix.go:200] guest clock delta is within tolerance: 84.40986ms
	I0520 11:46:43.865790   58840 start.go:83] releasing machines lock for "default-k8s-diff-port-668445", held for 21.727726024s
	I0520 11:46:43.865824   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.866087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.868681   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869054   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.869076   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869256   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869792   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.870035   58840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:43.870091   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.870174   58840 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:43.870195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.872817   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873059   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873496   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873537   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873684   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.873687   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873861   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.873866   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.874033   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.874046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.874234   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	W0520 11:46:43.973840   58840 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:43.973913   58840 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:43.980706   58840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:44.132625   58840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:44.140558   58840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:44.140629   58840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:44.158922   58840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:44.158946   58840 start.go:494] detecting cgroup driver to use...
	I0520 11:46:44.159006   58840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:44.179206   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:44.194183   58840 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:44.194254   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:44.208701   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:44.223086   58840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:44.342811   58840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:44.491358   58840 docker.go:233] disabling docker service ...
	I0520 11:46:44.491428   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:44.509165   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:44.526088   58840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:44.700452   58840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:44.833129   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:44.849090   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:44.870148   58840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:44.870222   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.880951   58840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:44.881016   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.895038   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.908245   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.918844   58840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:44.929838   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.940446   58840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.960350   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.974587   58840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:44.985689   58840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:44.985748   58840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:44.999929   58840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:45.010732   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:45.140038   58840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:45.291656   58840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:45.291728   58840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:45.297351   58840 start.go:562] Will wait 60s for crictl version
	I0520 11:46:45.297412   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:46:45.301297   58840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:45.343635   58840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:45.343703   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.373914   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.404596   58840 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:43.892523   57519 main.go:141] libmachine: (no-preload-814599) Calling .Start
	I0520 11:46:43.892686   57519 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:46:43.893370   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:46:43.893729   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:46:43.894105   57519 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:46:43.894847   57519 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:46:45.229992   57519 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:46:45.230981   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.231451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.231518   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.231438   59753 retry.go:31] will retry after 187.539159ms: waiting for machine to come up
	I0520 11:46:45.420854   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.421369   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.421393   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.421328   59753 retry.go:31] will retry after 327.709711ms: waiting for machine to come up
	I0520 11:46:45.750799   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.751384   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.751414   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.751334   59753 retry.go:31] will retry after 445.096849ms: waiting for machine to come up
	I0520 11:46:46.197899   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.198567   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.198598   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.198536   59753 retry.go:31] will retry after 561.029629ms: waiting for machine to come up
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.405909   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:45.408865   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409399   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:45.409429   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409655   58840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:45.413815   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:45.426264   58840 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:45.426366   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:45.426405   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:45.468938   58840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:45.469021   58840 ssh_runner.go:195] Run: which lz4
	I0520 11:46:45.474957   58840 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:45.481006   58840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:45.481039   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:44.255728   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:44.756897   58398 node_ready.go:49] node "embed-certs-274976" has status "Ready":"True"
	I0520 11:46:44.756963   58398 node_ready.go:38] duration metric: took 7.005828991s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:44.756978   58398 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:44.767584   58398 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774049   58398 pod_ready.go:92] pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:44.774078   58398 pod_ready.go:81] duration metric: took 6.468023ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774099   58398 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795658   58398 pod_ready.go:92] pod "etcd-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.795691   58398 pod_ready.go:81] duration metric: took 2.021582833s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795705   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810711   58398 pod_ready.go:92] pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.810739   58398 pod_ready.go:81] duration metric: took 15.025215ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810751   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.760855   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.761534   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.761579   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.761484   59753 retry.go:31] will retry after 497.075791ms: waiting for machine to come up
	I0520 11:46:47.260059   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:47.260542   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:47.260580   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:47.260479   59753 retry.go:31] will retry after 910.980097ms: waiting for machine to come up
	I0520 11:46:48.173677   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:48.174161   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:48.174189   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:48.174108   59753 retry.go:31] will retry after 931.969789ms: waiting for machine to come up
	I0520 11:46:49.107672   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:49.108241   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:49.108271   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:49.108194   59753 retry.go:31] will retry after 1.075042352s: waiting for machine to come up
	I0520 11:46:50.184518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:50.185078   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:50.185108   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:50.185025   59753 retry.go:31] will retry after 1.174646184s: waiting for machine to come up
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.010637   58840 crio.go:462] duration metric: took 1.535717978s to copy over tarball
	I0520 11:46:47.010719   58840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:49.308712   58840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297962166s)
	I0520 11:46:49.308747   58840 crio.go:469] duration metric: took 2.298080001s to extract the tarball
	I0520 11:46:49.308756   58840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:49.346851   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:49.393090   58840 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:49.393118   58840 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:49.393126   58840 kubeadm.go:928] updating node { 192.168.61.221 8444 v1.30.1 crio true true} ...
	I0520 11:46:49.393247   58840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-668445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:49.393326   58840 ssh_runner.go:195] Run: crio config
	I0520 11:46:49.438553   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:49.438575   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:49.438592   58840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:49.438618   58840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.221 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-668445 NodeName:default-k8s-diff-port-668445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:49.438792   58840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-668445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:49.438876   58840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:49.449415   58840 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:49.449487   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:49.460951   58840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0520 11:46:49.479654   58840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:49.497494   58840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
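
The kubeadm config printed above is rendered from a Go template and copied to the node as /var/tmp/minikube/kubeadm.yaml.new, as the scp line shows. A rough sketch of that rendering step; the template and field names here are illustrative only and cover just the InitConfiguration section, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// An illustrative template for the InitConfiguration section of the config
// above; the real minikube template and its data struct differ.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`))

func main() {
	data := struct {
		AdvertiseAddress, CRISocket, NodeName, NodeIP string
		BindPort                                      int
	}{
		AdvertiseAddress: "192.168.61.221",
		BindPort:         8444,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "default-k8s-diff-port-668445",
		NodeIP:           "192.168.61.221",
	}
	// The rendered output would then be written to the node as
	// /var/tmp/minikube/kubeadm.yaml.new before the init phases run.
	if err := kubeadmTmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
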
	I0520 11:46:49.518474   58840 ssh_runner.go:195] Run: grep 192.168.61.221	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:49.522598   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:49.537237   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:49.686022   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:49.712850   58840 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445 for IP: 192.168.61.221
	I0520 11:46:49.712871   58840 certs.go:194] generating shared ca certs ...
	I0520 11:46:49.712885   58840 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:49.713065   58840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:49.713110   58840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:49.713123   58840 certs.go:256] generating profile certs ...
	I0520 11:46:49.713261   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.key
	I0520 11:46:49.713346   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key.50005da2
	I0520 11:46:49.713390   58840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key
	I0520 11:46:49.713548   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:49.713583   58840 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:49.713597   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:49.713629   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:49.713656   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:49.713686   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:49.713744   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:49.714370   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:49.751367   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:49.787162   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:49.822660   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:49.863401   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 11:46:49.895295   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:49.928089   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:49.952687   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:49.982562   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:50.007234   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:50.031099   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:50.055146   58840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:50.073066   58840 ssh_runner.go:195] Run: openssl version
	I0520 11:46:50.080951   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:50.092134   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097436   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097496   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.103714   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:50.114321   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:50.125104   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131225   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131276   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.139113   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:50.153959   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:50.165441   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170302   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170349   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.176388   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:50.188372   58840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:50.193168   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:50.199298   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:50.208100   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:50.214704   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:50.220565   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:50.226463   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
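
Each "openssl x509 -checkend 86400" run above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. A rough Go equivalent of that check, with the path hard-coded purely for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within the given
// window, the same question `openssl x509 -checkend 86400` answers above.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
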
	I0520 11:46:50.232400   58840 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:50.232472   58840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:50.232532   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.271977   58840 cri.go:89] found id: ""
	I0520 11:46:50.272070   58840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:50.283676   58840 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:50.283699   58840 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:50.283706   58840 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:50.283753   58840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:50.296808   58840 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:50.298406   58840 kubeconfig.go:125] found "default-k8s-diff-port-668445" server: "https://192.168.61.221:8444"
	I0520 11:46:50.300563   58840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:50.310940   58840 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.221
	I0520 11:46:50.310964   58840 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:50.310975   58840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:50.311010   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.361321   58840 cri.go:89] found id: ""
	I0520 11:46:50.361400   58840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:50.382699   58840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:50.397635   58840 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:50.397657   58840 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:50.397707   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0520 11:46:50.407303   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:50.407364   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:50.418331   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0520 11:46:50.427889   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:50.427938   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:50.437649   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.446810   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:50.446861   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.456428   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0520 11:46:50.465444   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:50.465512   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:50.478585   58840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:50.489053   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:50.607045   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.346500   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.594241   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.661179   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
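
The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of doing a full `kubeadm init`, which is how restartPrimaryControlPlane rebuilds the missing /etc/kubernetes files. A simplified sketch of driving that sequence from Go, run directly on the node; the real code wraps each command in the SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Re-run the kubeadm init phases in the same order as the log above,
// pointing each at the generated config. Error handling is simplified.
func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the bundled binaries, matching the PATH override in the log.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.30.1:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
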
	I0520 11:46:51.751253   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:51.751331   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.318166   58398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.318193   58398 pod_ready.go:81] duration metric: took 1.507434439s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.318207   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323814   58398 pod_ready.go:92] pod "kube-proxy-6n9d5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.323850   58398 pod_ready.go:81] duration metric: took 5.632938ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323865   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356208   58398 pod_ready.go:92] pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.356235   58398 pod_ready.go:81] duration metric: took 32.360735ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356247   58398 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:50.363910   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:52.364530   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:51.360905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:51.482451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:51.482493   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:51.361293   59753 retry.go:31] will retry after 1.455979778s: waiting for machine to come up
	I0520 11:46:52.818580   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:52.819134   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:52.819162   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:52.819073   59753 retry.go:31] will retry after 2.771639261s: waiting for machine to come up
	I0520 11:46:55.593581   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:55.594138   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:55.594167   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:55.594089   59753 retry.go:31] will retry after 2.881341208s: waiting for machine to come up
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.252101   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.752498   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.794795   58840 api_server.go:72] duration metric: took 1.043540264s to wait for apiserver process to appear ...
	I0520 11:46:52.794823   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:52.794847   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:52.795569   58840 api_server.go:269] stopped: https://192.168.61.221:8444/healthz: Get "https://192.168.61.221:8444/healthz": dial tcp 192.168.61.221:8444: connect: connection refused
	I0520 11:46:53.294972   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.015833   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.015872   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.015889   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.053674   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.053705   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.294988   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.300440   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.300474   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:56.795720   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.813192   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.813221   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.295045   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.298962   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:57.298994   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.795590   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.800795   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:46:57.809060   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:57.809084   58840 api_server.go:131] duration metric: took 5.014253865s to wait for apiserver health ...
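
The healthz probes above progress from connection refused, through 403 (the anonymous probe is rejected before RBAC bootstrap), through 500 (post-start hooks such as rbac/bootstrap-roles still failing), to the final 200. A minimal sketch of that polling loop; TLS verification is skipped because the probe only checks reachability, not identity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires, mirroring the checks above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.221:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
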
	I0520 11:46:57.809092   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:57.809098   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:57.811020   58840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:54.864263   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.363797   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.812202   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:57.824260   58840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:57.843189   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:57.851978   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:57.852012   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:57.852029   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:57.852035   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:57.852042   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:57.852046   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:46:57.852055   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:57.852063   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:57.852067   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:46:57.852073   58840 system_pods.go:74] duration metric: took 8.872046ms to wait for pod list to return data ...
	I0520 11:46:57.852084   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:57.855020   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:57.855043   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:57.855052   58840 node_conditions.go:105] duration metric: took 2.95984ms to run NodePressure ...
	I0520 11:46:57.855065   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:58.124410   58840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129201   58840 kubeadm.go:733] kubelet initialised
	I0520 11:46:58.129222   58840 kubeadm.go:734] duration metric: took 4.789292ms waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129229   58840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:58.134883   58840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.139522   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139557   58840 pod_ready.go:81] duration metric: took 4.638772ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.139569   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139583   58840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.143636   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143657   58840 pod_ready.go:81] duration metric: took 4.061011ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.143669   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143677   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.147479   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147499   58840 pod_ready.go:81] duration metric: took 3.813494ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.147510   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147517   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.247677   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247710   58840 pod_ready.go:81] duration metric: took 100.183226ms for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.247723   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247730   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.648289   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648339   58840 pod_ready.go:81] duration metric: took 400.599602ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.648352   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648365   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.047071   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047099   58840 pod_ready.go:81] duration metric: took 398.721478ms for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.047108   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047114   58840 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.447840   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447866   58840 pod_ready.go:81] duration metric: took 400.738125ms for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.447876   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447885   58840 pod_ready.go:38] duration metric: took 1.318647527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
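
The pod_ready.go lines above skip pods whose node has not yet reported Ready and otherwise wait for each system pod's own Ready condition. A rough stand-in for that per-pod check using kubectl's JSONPath output; the real code queries the API through client-go rather than shelling out:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady shells out to kubectl and reports whether the pod's Ready
// condition is "True".
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := podReady("kube-system", "etcd-default-k8s-diff-port-668445"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
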
	I0520 11:46:59.447899   58840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:59.459985   58840 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:59.460011   58840 kubeadm.go:591] duration metric: took 9.176298972s to restartPrimaryControlPlane
	I0520 11:46:59.460023   58840 kubeadm.go:393] duration metric: took 9.227629245s to StartCluster
	I0520 11:46:59.460038   58840 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.460120   58840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:59.461897   58840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.462172   58840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:59.464109   58840 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:59.462263   58840 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:59.462375   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:59.465582   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:59.464212   58840 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465645   58840 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465657   58840 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:59.465693   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464218   58840 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465752   58840 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465768   58840 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:59.465794   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464219   58840 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465877   58840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-668445"
	I0520 11:46:59.466087   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466116   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466210   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466216   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466232   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466235   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.481963   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:46:59.482030   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0520 11:46:59.482551   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.482768   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.483115   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483140   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483495   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483516   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483759   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.483930   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.483979   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.484484   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.484509   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.485272   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 11:46:59.485657   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.486686   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.486716   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.487080   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.487552   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.487588   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.487989   58840 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.488007   58840 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:59.488037   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.488349   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.488384   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.500796   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0520 11:46:59.501376   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.501793   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0520 11:46:59.502058   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.502098   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.502308   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.502439   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.502661   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.502737   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 11:46:59.502976   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503009   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.503199   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.503329   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.503548   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.503648   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503659   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.504278   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.504632   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.504814   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.506755   58840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:59.504875   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.505278   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.508201   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:59.508219   58840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:59.508238   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.509465   58840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:59.510750   58840 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.510765   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:59.510778   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.511023   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511444   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.511480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511707   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.511939   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.512117   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.512271   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.513612   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.513980   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.514009   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.514207   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.514354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.514460   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.514564   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.523003   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0520 11:46:59.523361   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.523835   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.523858   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.524115   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.524266   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.525489   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.525661   58840 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.525673   58840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:59.525685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.527762   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.528087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.528309   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.528414   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.528559   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.652795   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:59.672785   58840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:46:59.759683   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.760767   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.775512   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:59.775536   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:59.807608   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:59.807633   58840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:59.835597   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:59.835624   58840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:59.862717   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:47:00.145002   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145032   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145352   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145352   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145372   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.145380   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145389   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145645   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145700   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145717   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.150992   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.151008   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.151358   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.151376   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.151385   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.802987   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803030   58840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042235492s)
	I0520 11:47:00.803063   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803081   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803386   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803404   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803422   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803424   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803434   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803439   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803449   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803464   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803621   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803631   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803642   58840 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-668445"
	I0520 11:47:00.803677   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803729   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803743   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.805628   58840 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
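The addon path logged above reduces to copying manifests into /etc/kubernetes/addons on the guest and applying them with the kubectl binary bundled on the node. A minimal sketch of the equivalent manual apply, using only the paths and versions that appear in the log (not a documented minikube interface):

    # Apply the metrics-server manifests the same way the addon manager does,
    # with the node-local kubeconfig and kubectl binary.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.1/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml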
	I0520 11:46:58.478246   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:58.478733   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:58.478765   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:58.478670   59753 retry.go:31] will retry after 3.220647355s: waiting for machine to come up
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.806818   58840 addons.go:505] duration metric: took 1.344560138s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0520 11:47:01.675861   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.863455   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.863859   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.702053   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702503   57519 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:47:01.702519   57519 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:47:01.702532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702964   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.702994   57519 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:47:01.703017   57519 main.go:141] libmachine: (no-preload-814599) DBG | skip adding static IP to network mk-no-preload-814599 - found existing host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"}
	I0520 11:47:01.703036   57519 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:47:01.703048   57519 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:47:01.705097   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705376   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.705409   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705482   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:47:01.705511   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:47:01.705543   57519 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:47:01.705558   57519 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:47:01.705593   57519 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:47:01.833182   57519 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:47:01.833551   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:47:01.834149   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:01.836532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837008   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.837047   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837310   57519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:47:01.837509   57519 machine.go:94] provisionDockerMachine start ...
	I0520 11:47:01.837527   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:01.837717   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.839814   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840151   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.840171   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840335   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.840485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840631   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.840958   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.841138   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.841149   57519 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:47:01.953190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:47:01.953221   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953480   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:47:01.953504   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953667   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.956155   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956487   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.956518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956684   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.956884   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957071   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957239   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.957416   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.957580   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.957593   57519 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:47:02.079993   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:47:02.080018   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.082623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.082950   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.082984   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.083154   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.083353   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083520   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083677   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.083852   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.084014   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.084029   57519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:47:02.204190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
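The SSH script above pins the guest hostname and maps 127.0.1.1 to it in /etc/hosts. A quick check on the guest that the provisioner's edit took effect (a sketch; the expected value is taken from the log):

    hostname                          # expected: no-preload-814599
    grep -n '127.0.1.1' /etc/hosts    # expected: 127.0.1.1 no-preload-814599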
	I0520 11:47:02.204225   57519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:47:02.204252   57519 buildroot.go:174] setting up certificates
	I0520 11:47:02.204264   57519 provision.go:84] configureAuth start
	I0520 11:47:02.204280   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:02.204562   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:02.207132   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207477   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.207501   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207633   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.209905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210212   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.210231   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210414   57519 provision.go:143] copyHostCerts
	I0520 11:47:02.210472   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:47:02.210493   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:47:02.210560   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:47:02.210743   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:47:02.210758   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:47:02.210795   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:47:02.210875   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:47:02.210883   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:47:02.210905   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:47:02.210962   57519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:47:02.411383   57519 provision.go:177] copyRemoteCerts
	I0520 11:47:02.411436   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:47:02.411458   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.414071   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414443   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.414473   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414675   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.414859   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.415006   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.415125   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.507148   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:47:02.536944   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:47:02.562291   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:47:02.589034   57519 provision.go:87] duration metric: took 384.752486ms to configureAuth
	I0520 11:47:02.589092   57519 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:47:02.589320   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:47:02.589412   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.592287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592704   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.592731   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592957   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.593180   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593368   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593561   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.593724   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.593910   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.593931   57519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:47:02.899943   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:47:02.900037   57519 machine.go:97] duration metric: took 1.062511085s to provisionDockerMachine
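The container-runtime step above writes a one-line /etc/sysconfig/crio.minikube drop-in with the --insecure-registry option and restarts cri-o; the %!s(MISSING) token is Go's missing-argument marker and appears to come from how the command is echoed into the log, not from the file itself. A sketch for confirming the result on the guest:

    cat /etc/sysconfig/crio.minikube   # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # expected: active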
	I0520 11:47:02.900074   57519 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:47:02.900112   57519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:47:02.900155   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:02.900546   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:47:02.900581   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.903643   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904057   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.904089   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904253   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.904433   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.904580   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.904710   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.999752   57519 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:47:03.005832   57519 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:47:03.005856   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:47:03.005940   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:47:03.006029   57519 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:47:03.006145   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:47:03.016853   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:03.049845   57519 start.go:296] duration metric: took 149.725ms for postStartSetup
	I0520 11:47:03.049886   57519 fix.go:56] duration metric: took 19.183954619s for fixHost
	I0520 11:47:03.049912   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.052841   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053175   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.053203   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053342   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.053537   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053730   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053899   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.054048   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:03.054227   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:03.054241   57519 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:47:03.166461   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205623.121199957
	
	I0520 11:47:03.166490   57519 fix.go:216] guest clock: 1716205623.121199957
	I0520 11:47:03.166500   57519 fix.go:229] Guest: 2024-05-20 11:47:03.121199957 +0000 UTC Remote: 2024-05-20 11:47:03.049891315 +0000 UTC m=+366.742654246 (delta=71.308642ms)
	I0520 11:47:03.166523   57519 fix.go:200] guest clock delta is within tolerance: 71.308642ms
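The guest clock check is plain subtraction of the two readings in the log: guest 11:47:03.121199957 minus remote 11:47:03.049891315 is 0.071308642 s, i.e. the reported 71.308642 ms, which is under the skew tolerance, so no clock resync is needed. The same arithmetic, for reference:

    awk 'BEGIN { printf "%.6f ms\n", (3.121199957 - 3.049891315) * 1000 }'   # -> 71.308642 ms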
	I0520 11:47:03.166530   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 19.30063132s
	I0520 11:47:03.166554   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.166858   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:03.169486   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169816   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.169861   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169990   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170546   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170728   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170814   57519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:47:03.170875   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.170920   57519 ssh_runner.go:195] Run: cat /version.json
	I0520 11:47:03.170944   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.173717   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.173944   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174128   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174153   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174338   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174365   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174371   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174570   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174571   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174793   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.174956   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.175049   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:03.175169   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:47:03.276456   57519 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:47:03.276537   57519 ssh_runner.go:195] Run: systemctl --version
	I0520 11:47:03.282946   57519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:47:03.428948   57519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:47:03.436462   57519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:47:03.436529   57519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:47:03.453022   57519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:47:03.453045   57519 start.go:494] detecting cgroup driver to use...
	I0520 11:47:03.453103   57519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:47:03.469505   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:47:03.483617   57519 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:47:03.483673   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:47:03.498514   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:47:03.513169   57519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:47:03.638355   57519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:47:03.819407   57519 docker.go:233] disabling docker service ...
	I0520 11:47:03.819473   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:47:03.837928   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:47:03.853747   57519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:47:03.976480   57519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:47:04.097685   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:47:04.112917   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:47:04.132979   57519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:47:04.133044   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.145501   57519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:47:04.145594   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.157957   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.168774   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.180521   57519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:47:04.191739   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.203232   57519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.221386   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.231999   57519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:47:04.243401   57519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:47:04.243447   57519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:47:04.259232   57519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:47:04.270945   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:04.387908   57519 ssh_runner.go:195] Run: sudo systemctl restart crio
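The block above configures cri-o purely by editing /etc/crio/crio.conf.d/02-crio.conf with sed, enabling IPv4 forwarding and the br_netfilter module, then reloading systemd and restarting the service. Collected (and slightly abridged) into one script that simply replays the commands from the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image and cgroup driver, as set in the log.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Kernel prerequisites checked/fixed in the log.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio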
	I0520 11:47:04.526507   57519 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:47:04.526588   57519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:47:04.531568   57519 start.go:562] Will wait 60s for crictl version
	I0520 11:47:04.531628   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.535751   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:47:04.583384   57519 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:47:04.583470   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.617545   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.652685   57519 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:47:04.654199   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:04.657274   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657624   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:04.657653   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657858   57519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:47:04.662344   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:04.676310   57519 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:47:04.676437   57519 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:47:04.676480   57519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:47:04.723427   57519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:47:04.723452   57519 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:47:04.723502   57519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.723705   57519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.723729   57519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.723773   57519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.723829   57519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.723896   57519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.723913   57519 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:47:04.723767   57519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.725737   57519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.725630   57519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.725696   57519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.849687   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896118   57519 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:47:04.896166   57519 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896214   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.901457   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.940481   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:47:04.940581   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946097   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0520 11:47:04.946120   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946168   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.958539   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:05.058882   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:05.059155   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:05.061672   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:47:05.082727   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:05.087840   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:05.606193   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
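The image-cache pass above can be reproduced by hand on the node with the same two commands the log records; a simplified sketch for one image follows (tag and cache path are taken from the log; chaining them with || is a simplification, since minikube also compares the returned ID against the hash it expects). The %!s(MISSING)/%!y(MISSING) fragments in the stat line are the logger swallowing the literal "%s %y" format string, not a broken command.

    # Non-zero exit means the tag is absent from the image store and must be loaded from the cache tarball.
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-controller-manager:v1.30.1 \
      || sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1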
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
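The loop above is simply waiting for an apiserver process to appear on its node: pgrep matches the full command line (-f) exactly (-x), reports only the newest match (-n), and exits 1 with no output when nothing matches, so the caller retries roughly every half second, as the timestamps show.

    # Exit status 1 / empty output: kube-apiserver has not started yet on this node.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'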
	I0520 11:47:03.677231   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.176977   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.677184   58840 node_ready.go:49] node "default-k8s-diff-port-668445" has status "Ready":"True"
	I0520 11:47:06.677208   58840 node_ready.go:38] duration metric: took 7.004390787s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:47:06.677217   58840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:06.684890   58840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690323   58840 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.690345   58840 pod_ready.go:81] duration metric: took 5.420403ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690354   58840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695944   58840 pod_ready.go:92] pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.695968   58840 pod_ready.go:81] duration metric: took 5.606311ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695979   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701932   58840 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.701953   58840 pod_ready.go:81] duration metric: took 5.965003ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701966   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
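The node_ready/pod_ready polling above has a direct kubectl equivalent; a rough sketch (node and pod names are the ones in the log, the jsonpath queries are an assumption about how one would check the same conditions by hand):

    # Node Ready condition, as node_ready.go polls it
    kubectl get node default-k8s-diff-port-668445 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Per-pod Ready condition, as pod_ready.go polls it
    kubectl -n kube-system get pod coredns-7db6d8ff4d-x92p9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'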
	I0520 11:47:04.363354   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:06.363448   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.125249   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1: (3.166660678s)
	I0520 11:47:08.125272   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (3.179080546s)
	I0520 11:47:08.125294   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:47:08.125314   57519 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:47:08.125343   57519 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.125346   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1: (3.066428315s)
	I0520 11:47:08.125383   57519 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:47:08.125398   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125415   57519 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.125415   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (3.066238572s)
	I0520 11:47:08.125472   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125517   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (3.063820229s)
	I0520 11:47:08.125478   57519 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:47:08.125582   57519 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.125596   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (3.04284016s)
	I0520 11:47:08.125612   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125637   57519 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:47:08.125659   57519 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.125668   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1: (3.037804525s)
	I0520 11:47:08.125696   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125707   57519 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:47:08.125709   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.519483497s)
	I0520 11:47:08.125727   57519 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.125758   57519 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:47:08.125775   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125808   57519 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.125843   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.140377   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.140435   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.140472   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.140518   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.140550   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.140604   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.277671   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:47:08.277785   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.286159   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:47:08.286163   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:47:08.286261   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:08.286297   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:08.287554   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:47:08.287623   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:08.300162   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:47:08.300183   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0520 11:47:08.300197   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300203   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0520 11:47:08.300221   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0520 11:47:08.300235   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0520 11:47:08.300168   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:47:08.300245   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300254   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:08.300315   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:08.304669   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0520 11:47:10.580290   57519 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.279947927s)
	I0520 11:47:10.580392   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0520 11:47:10.580421   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.280153203s)
	I0520 11:47:10.580444   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:47:10.580466   57519 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:10.580514   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
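When no control-plane containers are found, the gather step above always collects the same five sources; the describe-nodes step fails with "connection refused" simply because no apiserver is running yet, as the empty crictl listings show. Collecting the same data by hand on the node uses exactly the commands recorded in the log:

    sudo journalctl -u kubelet -n 400                                            # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400      # dmesg
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400                                               # CRI-O
    sudo crictl ps -a                                                            # container status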
	I0520 11:47:07.717364   58840 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:07.717391   58840 pod_ready.go:81] duration metric: took 1.01541621s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:07.717404   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041632   58840 pod_ready.go:92] pod "kube-proxy-7ddtk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:08.041657   58840 pod_ready.go:81] duration metric: took 324.24633ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041669   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:10.047877   58840 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.364366   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:10.366662   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:12.895005   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
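The metrics-server pods polled above report Ready=False throughout this stretch of the log; the usual next step is to look at the pod's events and logs directly. A minimal sketch, with the pod name taken from the log and the kubectl invocation an assumption:

    kubectl -n kube-system describe pod metrics-server-569cc877fc-ll4n8   # events: image pulls, probe failures
    kubectl -n kube-system logs metrics-server-569cc877fc-ll4n8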
	I0520 11:47:11.650206   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.069627914s)
	I0520 11:47:11.650241   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:47:11.650265   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:11.650322   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:13.011174   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.360821671s)
	I0520 11:47:13.011203   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:47:13.011225   57519 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.011270   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:12.048869   58840 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:12.048894   58840 pod_ready.go:81] duration metric: took 4.007218109s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:12.048903   58840 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:14.057711   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.555471   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:15.365437   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:17.864290   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.885885   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.874589428s)
	I0520 11:47:16.885917   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:47:16.885952   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:16.886034   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:18.674100   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.788037677s)
	I0520 11:47:18.674128   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:47:18.674149   57519 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:18.674197   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:20.543671   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.86944462s)
	I0520 11:47:20.543705   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:47:20.543733   57519 cache_images.go:123] Successfully loaded all cached images
	I0520 11:47:20.543739   57519 cache_images.go:92] duration metric: took 15.820273224s to LoadCachedImages
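With the cached images transferred and loaded, the runtime's view can be re-checked with the same crictl call this phase opened with:

    # The tags listed under LoadCachedImages above should now all appear.
    sudo crictl images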
	I0520 11:47:20.543749   57519 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:47:20.543863   57519 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:47:20.543944   57519 ssh_runner.go:195] Run: crio config
	I0520 11:47:20.593647   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:20.593672   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:20.593692   57519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:47:20.593719   57519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:47:20.593863   57519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:47:20.593936   57519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:47:20.604739   57519 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:47:20.604791   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:47:20.614029   57519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:47:20.630587   57519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:47:20.647296   57519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:47:20.664453   57519 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:47:20.668296   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:20.681086   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:20.816542   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
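At this point the kubelet drop-in, unit file, and kubeadm config have all been written to the node (the paths are in the scp lines above) and kubelet has been started, so they can be inspected in place; reading the staged kubeadm config also shows the real values behind the "0%!"(MISSING) logger artifacts in the evictionHard block, which are literal "0%" strings.

    systemctl cat kubelet                                            # unit plus the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /var/tmp/minikube/kubeadm.yaml.new                      # the config dumped above
    systemctl is-active kubelet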
	I0520 11:47:20.834344   57519 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:47:20.834369   57519 certs.go:194] generating shared ca certs ...
	I0520 11:47:20.834388   57519 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:47:20.834578   57519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:47:20.834634   57519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:47:20.834649   57519 certs.go:256] generating profile certs ...
	I0520 11:47:20.834777   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:47:20.834876   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:47:20.834928   57519 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:47:20.835090   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:47:20.835146   57519 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:47:20.835159   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:47:20.835192   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:47:20.835226   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:47:20.835255   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:47:20.835314   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:20.836325   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:47:20.883227   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:47:20.918566   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:47:20.959097   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:47:21.006422   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:47:21.035717   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:47:21.071087   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:47:21.094330   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:47:21.119597   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:47:21.145488   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:47:21.173152   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:47:21.197627   57519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:47:21.214595   57519 ssh_runner.go:195] Run: openssl version
	I0520 11:47:21.220678   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:47:21.232251   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.236953   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.237015   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.243048   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:47:21.253667   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:47:21.264223   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268607   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268648   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.274112   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:47:21.284427   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:47:21.294811   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299403   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299455   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.304954   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:47:21.315320   57519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:47:21.319956   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:47:21.325692   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:47:21.331232   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:47:21.337240   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
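The certificate handling just above relies on two openssl behaviours worth naming: x509 -hash prints the subject-name hash that the /etc/ssl/certs/<hash>.0 symlinks are keyed on, and -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. A standalone example with one of the files from the log:

    # Exit 0: still valid for at least a day; exit 1: expiring soon or unreadable.
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"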
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:18.556589   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.557476   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.363536   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:22.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:21.342881   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:47:21.349172   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:47:21.354721   57519 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:47:21.354839   57519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:47:21.354923   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.399473   57519 cri.go:89] found id: ""
	I0520 11:47:21.399554   57519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:47:21.413632   57519 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:47:21.413656   57519 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:47:21.413662   57519 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:47:21.413712   57519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:47:21.425556   57519 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:47:21.426515   57519 kubeconfig.go:125] found "no-preload-814599" server: "https://192.168.39.185:8443"
	I0520 11:47:21.428556   57519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:47:21.438628   57519 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.185
	I0520 11:47:21.438655   57519 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:47:21.438668   57519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:47:21.438708   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.485080   57519 cri.go:89] found id: ""
	I0520 11:47:21.485149   57519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:47:21.502591   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:47:21.512148   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:47:21.512168   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:47:21.512208   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:47:21.521378   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:47:21.521434   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:47:21.531568   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:47:21.541404   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:47:21.541459   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:47:21.552018   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.562667   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:47:21.562711   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.572240   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:47:21.581472   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:47:21.581535   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
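	(The four grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails; here every file is simply missing, so each grep exits 2 and each rm is a no-op. A rough local equivalent in Go, assuming direct file access rather than the ssh_runner used in the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range confs {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				fmt.Printf("kept %s\n", f)
				continue
			}
			// Missing file or unexpected endpoint: remove it so the
			// kubeadm phases that follow regenerate a fresh copy.
			os.Remove(f)
			fmt.Printf("removed (or already absent): %s\n", f)
		}
	}
	)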
	I0520 11:47:21.591072   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:47:21.601114   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:21.716435   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.597273   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.792224   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.866757   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
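	(With the config staged, the restart replays kubeadm init phases in a fixed order — certs, kubeconfig, kubelet-start, control-plane, etcd — against /var/tmp/minikube/kubeadm.yaml, exactly as the five Run lines above show. A condensed sketch of that sequence, assuming the versioned kubeadm binary sits under /var/lib/minikube/binaries/v1.30.1 as the PATH prefix in the log implies:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := []string{"/var/lib/minikube/binaries/v1.30.1/kubeadm", "init", "phase"}
			args = append(args, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Printf("phase %v failed: %v\n", phase, err)
				return
			}
		}
	}
	)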
	I0520 11:47:22.999294   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:47:22.999389   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.499642   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.000424   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.022646   57519 api_server.go:72] duration metric: took 1.023349447s to wait for apiserver process to appear ...
	I0520 11:47:24.022673   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:47:24.022691   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:24.023162   57519 api_server.go:269] stopped: https://192.168.39.185:8443/healthz: Get "https://192.168.39.185:8443/healthz": dial tcp 192.168.39.185:8443: connect: connection refused
	I0520 11:47:24.523760   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:23.055887   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:25.555044   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:24.863476   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.363148   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.158548   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.158572   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.158585   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.205830   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.205867   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.523283   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.527618   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:27.527643   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.023238   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.029601   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.029630   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.522849   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.528350   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.528372   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.023422   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.027460   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.027493   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.523739   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.528289   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.528317   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.022859   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.027324   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:30.027348   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.522929   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.527596   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:47:30.534571   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:47:30.534592   57519 api_server.go:131] duration metric: took 6.511912744s to wait for apiserver health ...
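	(The exchange above is the healthz wait: https://192.168.39.185:8443/healthz is polled every 500ms, with connection-refused, 403 (RBAC bootstrap roles not yet installed) and 500 (post-start hooks still failing) all treated as retryable, until it returns 200 roughly 6.5s after the apiserver process appeared. A bare-bones version of that poll loop; it skips TLS verification purely for illustration, whereas minikube authenticates with the cluster's client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Verification skipped only for this sketch; the real check trusts the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.39.185:8443/healthz"
		for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}
	)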
	I0520 11:47:30.534601   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:30.534609   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:30.536647   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:47:30.537896   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:47:30.548779   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
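	(Configuring the bridge CNI amounts to writing a conflist into /etc/cni/net.d. The 496-byte payload itself is not reproduced in the log, so the JSON embedded below is only a generic bridge + host-local example in the upstream CNI conflist format, not necessarily what minikube writes:

	package main

	import (
		"fmt"
		"os"
	)

	// A generic bridge conflist used purely to illustrate the file's shape.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			fmt.Println(err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
	}
	)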
	I0520 11:47:30.568124   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:47:30.577489   57519 system_pods.go:59] 8 kube-system pods found
	I0520 11:47:30.577522   57519 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:47:30.577530   57519 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:47:30.577538   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:47:30.577544   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:47:30.577550   57519 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:47:30.577560   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:47:30.577567   57519 system_pods.go:61] "metrics-server-569cc877fc-v2zh9" [1be844eb-66ba-4a82-86fb-6f53de7e4d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:47:30.577577   57519 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:47:30.577586   57519 system_pods.go:74] duration metric: took 9.439264ms to wait for pod list to return data ...
	I0520 11:47:30.577598   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:47:30.581346   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:47:30.581367   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:47:30.581377   57519 node_conditions.go:105] duration metric: took 3.772205ms to run NodePressure ...
	I0520 11:47:30.581398   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:30.849703   57519 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853895   57519 kubeadm.go:733] kubelet initialised
	I0520 11:47:30.853917   57519 kubeadm.go:734] duration metric: took 4.191303ms waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853925   57519 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:30.861807   57519 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.866768   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866797   57519 pod_ready.go:81] duration metric: took 4.964994ms for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.866809   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866819   57519 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.871001   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871027   57519 pod_ready.go:81] duration metric: took 4.200278ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.871038   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871046   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.875615   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875635   57519 pod_ready.go:81] duration metric: took 4.580378ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.875645   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875655   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.971186   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971210   57519 pod_ready.go:81] duration metric: took 95.545422ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.971219   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971226   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
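	(After the addon phase, each system-critical pod is polled for its Ready condition; while the node itself still reports Ready:"False", the per-pod wait is short-circuited with the `node ... hosting pod ... is currently not "Ready"` errors seen above. A minimal client-go sketch of the underlying per-pod check — kubeconfig path and pod name taken from the log; the real wait retries for up to 4m0s and also tracks node status:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(c *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podReady(client, "kube-system", "kube-proxy-mqg49")
		fmt.Println(ready, err)
	}
	)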
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:27.555356   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.556470   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.862807   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:32.364158   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:31.372203   57519 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:31.372223   57519 pod_ready.go:81] duration metric: took 400.985089ms for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:31.372231   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:33.379905   57519 pod_ready.go:102] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:35.378764   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:35.378793   57519 pod_ready.go:81] duration metric: took 4.006551156s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:35.378804   57519 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:32.056046   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.555606   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:36.556231   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.862916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.363328   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.385861   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.884432   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.557813   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.054616   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.863318   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:42.362965   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.893841   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.387666   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:43.055052   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:45.056688   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.863217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.362713   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:46.886365   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.384568   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.056743   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.555489   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.363267   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.863214   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.385469   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.884960   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:55.885430   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:52.055220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.059922   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.554742   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.862720   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.886732   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:00.387613   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:58.558508   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.055219   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:58.862939   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.362135   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:02.886809   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:04.887274   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:48:03.555849   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:06.059026   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.363379   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:05.863603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:07.386458   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:09.884793   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:08.554935   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.555580   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:08.362397   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.863960   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.385193   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:14.385749   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:13.058312   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.556249   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:13.363188   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.363224   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:17.861944   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:16.884628   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.885335   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.887306   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:18.055244   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.056597   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:19.862408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.862760   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:23.386334   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:25.388116   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.555469   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.362744   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:26.865098   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.884029   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.885238   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:27.054447   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.056075   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.556385   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.368181   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.862789   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:33.885831   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:34.055168   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.056035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.362591   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.863645   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.385081   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:38.884785   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.886065   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:38.556294   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.556701   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:39.362479   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:41.861916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.387356   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.884457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.056625   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.555575   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.864183   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.362155   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:47.885261   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:49.886656   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:48.055288   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.057220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:48.362952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.363988   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.863467   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.385482   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.386207   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:52.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.557133   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:55.362033   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:57.362624   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:56.885083   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.885662   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:57.055207   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.055409   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.554795   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.362869   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.363107   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.385471   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.386313   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.386387   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:03.555624   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:06.055651   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.862582   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.862917   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.863213   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.886494   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.384808   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.055875   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.056882   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.362663   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.863246   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.388256   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.887822   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:12.557989   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:15.056569   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.864153   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.362222   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:16.888363   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.385523   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.056820   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.556351   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.362408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.861822   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.386338   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.390122   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:25.885748   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:22.057526   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:24.554983   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.864449   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.363217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.384733   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:30.884703   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.056582   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:29.057033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.862736   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.362469   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.885757   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.385655   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.555033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.555749   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:33.363377   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.863335   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:37.885668   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.385866   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:37.559287   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.055915   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.362779   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.862212   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.862459   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.385950   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.387966   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.056415   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.555806   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:45.363614   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:47.863671   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:46.884916   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.885117   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:49:47.056668   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:49.554453   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.555691   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:50.363307   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:52.862785   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.386280   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:53.386860   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:55.886047   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:54.055131   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:56.055386   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:54.863545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.363023   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:58.385457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.385971   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:58.554785   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.555726   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:59.365132   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:01.863538   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:02.884908   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.384186   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.054799   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.056653   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.865619   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:07.387760   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.885607   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.555902   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.558035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:08.366901   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:10.862545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.862664   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:12.385235   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.886286   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:12.055821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.865340   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.362603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.388804   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.885030   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.056232   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.555242   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.555339   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.363446   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.861989   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.886042   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:24.384353   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.555974   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.054899   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.862506   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:25.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.384558   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.384818   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.387241   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.055327   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.055628   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.365187   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.862199   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.863892   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.885393   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.384549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.555087   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.055509   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.362103   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.362430   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.386206   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.386258   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.055828   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.554574   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.554821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.862750   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.862973   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.885277   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.886162   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.555316   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:45.556417   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:44.364091   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.364286   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.385011   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.385754   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.387956   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.055237   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.555567   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.362404   58398 pod_ready.go:81] duration metric: took 4m0.006145654s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:50:48.362424   58398 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:50:48.362431   58398 pod_ready.go:38] duration metric: took 4m3.605442277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:50:48.362444   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:50:48.362469   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:48.362528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:48.422343   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.422361   58398 cri.go:89] found id: ""
	I0520 11:50:48.422369   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:48.422428   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.427448   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:48.427512   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:48.477327   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.477353   58398 cri.go:89] found id: ""
	I0520 11:50:48.477361   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:48.477418   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.482113   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:48.482161   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:48.525019   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:48.525039   58398 cri.go:89] found id: ""
	I0520 11:50:48.525046   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:48.525087   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.529544   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:48.529616   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:48.571031   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:48.571055   58398 cri.go:89] found id: ""
	I0520 11:50:48.571065   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:48.571117   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.575601   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:48.575653   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:48.617086   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:48.617111   58398 cri.go:89] found id: ""
	I0520 11:50:48.617120   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:48.617174   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.623512   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:48.623585   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:48.660141   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.660170   58398 cri.go:89] found id: ""
	I0520 11:50:48.660179   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:48.660236   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.665578   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:48.665648   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:48.704366   58398 cri.go:89] found id: ""
	I0520 11:50:48.704398   58398 logs.go:276] 0 containers: []
	W0520 11:50:48.704408   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:48.704418   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:48.704482   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:48.745897   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:48.745925   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:48.745931   58398 cri.go:89] found id: ""
	I0520 11:50:48.745940   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:48.745997   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.750982   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.755064   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:48.755084   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.804692   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:48.804720   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.862735   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:48.862765   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:48.914854   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:48.914885   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:48.930449   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:48.930473   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.980371   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:48.980398   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:49.019847   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:49.019871   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:49.066103   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:49.066131   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:49.102912   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:49.102938   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:49.148380   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:49.148403   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:49.184436   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:49.184466   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:49.658226   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:49.658267   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:49.715674   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:49.715724   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:52.360868   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:52.378573   58398 api_server.go:72] duration metric: took 4m14.818629715s to wait for apiserver process to appear ...
	I0520 11:50:52.378595   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:50:52.378634   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:52.378694   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:52.423033   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.423057   58398 cri.go:89] found id: ""
	I0520 11:50:52.423067   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:52.423124   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.427361   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:52.427422   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:52.472357   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.472383   58398 cri.go:89] found id: ""
	I0520 11:50:52.472393   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:52.472452   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.477472   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:52.477536   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:52.517721   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.517742   58398 cri.go:89] found id: ""
	I0520 11:50:52.517750   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:52.517806   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.522347   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:52.522415   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:52.562720   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:52.562747   58398 cri.go:89] found id: ""
	I0520 11:50:52.562757   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:52.562815   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.567322   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:52.567378   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:52.605752   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:52.605779   58398 cri.go:89] found id: ""
	I0520 11:50:52.605787   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:52.605841   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.609932   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:52.609996   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:52.646704   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:52.646734   58398 cri.go:89] found id: ""
	I0520 11:50:52.646745   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:52.646805   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.651422   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:52.651493   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:52.690880   58398 cri.go:89] found id: ""
	I0520 11:50:52.690909   58398 logs.go:276] 0 containers: []
	W0520 11:50:52.690919   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:52.690927   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:52.690997   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:52.733525   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.733558   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:52.733562   58398 cri.go:89] found id: ""
	I0520 11:50:52.733568   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:52.733615   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.738004   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.743086   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:52.743108   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:52.795042   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:52.795078   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:52.809356   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:52.809384   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.854486   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:52.854517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.904266   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:52.904289   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.941257   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:52.941285   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.886142   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.385782   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:53.057541   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.554909   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:52.978583   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:52.978613   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:53.081246   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:53.081276   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:53.124472   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:53.124509   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:53.167027   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:53.167055   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:53.223091   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:53.223125   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:53.261390   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:53.261417   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:53.680820   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:53.680877   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:56.227910   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:50:56.233856   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:50:56.234827   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:50:56.234852   58398 api_server.go:131] duration metric: took 3.856250847s to wait for apiserver health ...
	I0520 11:50:56.234859   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:50:56.234880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:56.234923   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:56.273520   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.273539   58398 cri.go:89] found id: ""
	I0520 11:50:56.273545   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:56.273589   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.278121   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:56.278181   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:56.311934   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.311958   58398 cri.go:89] found id: ""
	I0520 11:50:56.311966   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:56.312011   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.315972   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:56.316033   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:56.351237   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.351259   58398 cri.go:89] found id: ""
	I0520 11:50:56.351268   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:56.351323   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.355480   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:56.355528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:56.389768   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.389782   58398 cri.go:89] found id: ""
	I0520 11:50:56.389789   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:56.389829   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.393815   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:56.393869   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:56.436557   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:56.436582   58398 cri.go:89] found id: ""
	I0520 11:50:56.436592   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:56.436637   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.440866   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:56.440943   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:56.480688   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.480734   58398 cri.go:89] found id: ""
	I0520 11:50:56.480744   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:56.480814   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.485880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:56.485974   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:56.521965   58398 cri.go:89] found id: ""
	I0520 11:50:56.521992   58398 logs.go:276] 0 containers: []
	W0520 11:50:56.521999   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:56.522005   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:56.522063   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:56.567585   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.567611   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:56.567617   58398 cri.go:89] found id: ""
	I0520 11:50:56.567627   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:56.567677   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.572972   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.577459   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:56.577484   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.636435   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:56.636467   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.675690   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:56.675714   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.720035   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:56.720065   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:56.734275   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:56.734302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:56.837237   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:56.837268   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.885482   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:56.885517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.922276   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:56.922302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.971119   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:56.971148   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:57.010383   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:57.010414   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:57.044880   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:57.044909   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:57.097906   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:57.097936   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:57.147600   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:57.147626   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:59.998576   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:50:59.998601   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:50:59.998605   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:50:59.998609   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:50:59.998613   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:50:59.998617   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:50:59.998621   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:50:59.998627   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:50:59.998631   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:50:59.998639   58398 system_pods.go:74] duration metric: took 3.763773626s to wait for pod list to return data ...
	I0520 11:50:59.998645   58398 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:00.001599   58398 default_sa.go:45] found service account: "default"
	I0520 11:51:00.001619   58398 default_sa.go:55] duration metric: took 2.967907ms for default service account to be created ...
	I0520 11:51:00.001626   58398 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:00.007124   58398 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:00.007151   58398 system_pods.go:89] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:51:00.007156   58398 system_pods.go:89] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:51:00.007161   58398 system_pods.go:89] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:51:00.007168   58398 system_pods.go:89] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:51:00.007175   58398 system_pods.go:89] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:51:00.007182   58398 system_pods.go:89] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:51:00.007194   58398 system_pods.go:89] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:00.007205   58398 system_pods.go:89] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:51:00.007215   58398 system_pods.go:126] duration metric: took 5.582307ms to wait for k8s-apps to be running ...
	I0520 11:51:00.007226   58398 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:00.007281   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:00.023780   58398 system_svc.go:56] duration metric: took 16.545966ms WaitForService to wait for kubelet
	I0520 11:51:00.023806   58398 kubeadm.go:576] duration metric: took 4m22.463864802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:00.023830   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:00.026757   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:00.026775   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:00.026788   58398 node_conditions.go:105] duration metric: took 2.951465ms to run NodePressure ...
	I0520 11:51:00.026798   58398 start.go:240] waiting for startup goroutines ...
	I0520 11:51:00.026804   58398 start.go:245] waiting for cluster config update ...
	I0520 11:51:00.026814   58398 start.go:254] writing updated cluster config ...
	I0520 11:51:00.027094   58398 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:00.078104   58398 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:00.079923   58398 out.go:177] * Done! kubectl is now configured to use "embed-certs-274976" cluster and "default" namespace by default
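	[editor's note] The embed-certs-274976 start above completes even though metrics-server-569cc877fc-ll4n8 is still listed as Pending / ContainersNotReady in the pod dumps at 11:50:59 and 11:51:00. A minimal way to inspect that stuck pod after the run (context and pod name copied from the log; the kubectl invocations themselves are a generic sketch, not part of the test):
	$ # assumes kubectl is configured for the cluster, as the "Done!" line states
	$ kubectl --context embed-certs-274976 -n kube-system get pod metrics-server-569cc877fc-ll4n8
	$ kubectl --context embed-certs-274976 -n kube-system describe pod metrics-server-569cc877fc-ll4n8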
	I0520 11:50:57.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.385874   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:57.554979   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:59.555797   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.886892   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:05.384747   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.060543   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:04.555457   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:06.556619   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:07.386407   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:09.885284   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:09.055313   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.056015   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.886887   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:13.887204   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:15.888591   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:12.055930   58840 pod_ready.go:81] duration metric: took 4m0.00701193s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:12.055967   58840 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:51:12.055978   58840 pod_ready.go:38] duration metric: took 4m5.378750595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:12.055998   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:51:12.056034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:12.056101   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:12.115475   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.115503   58840 cri.go:89] found id: ""
	I0520 11:51:12.115513   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:12.115568   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.120850   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:12.120940   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:12.161022   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.161055   58840 cri.go:89] found id: ""
	I0520 11:51:12.161062   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:12.161130   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.165525   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:12.165586   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:12.211862   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.211890   58840 cri.go:89] found id: ""
	I0520 11:51:12.211899   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:12.211959   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.217037   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:12.217114   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:12.256985   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.257010   58840 cri.go:89] found id: ""
	I0520 11:51:12.257018   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:12.257064   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.261765   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:12.261825   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:12.299745   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.299773   58840 cri.go:89] found id: ""
	I0520 11:51:12.299781   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:12.299842   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.307681   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:12.307752   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:12.346729   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.346754   58840 cri.go:89] found id: ""
	I0520 11:51:12.346763   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:12.346819   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.352034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:12.352096   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:12.394387   58840 cri.go:89] found id: ""
	I0520 11:51:12.394405   58840 logs.go:276] 0 containers: []
	W0520 11:51:12.394412   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:12.394416   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:12.394460   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:12.434198   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:12.434224   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.434229   58840 cri.go:89] found id: ""
	I0520 11:51:12.434238   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:12.434298   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.439424   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.444096   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:12.444113   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:12.501635   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:12.501668   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:12.516529   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:12.516557   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:12.662690   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:12.662723   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.724415   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:12.724455   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.767417   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:12.767446   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.815739   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:12.815775   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.860515   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:12.860543   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.900348   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:12.900377   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.948489   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:12.948518   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.996999   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:12.997030   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:13.034866   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:13.034896   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:13.532828   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:13.532875   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.083150   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:51:16.106110   58840 api_server.go:72] duration metric: took 4m16.64390235s to wait for apiserver process to appear ...
	I0520 11:51:16.106155   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:51:16.106198   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:16.106262   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:16.148418   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:16.148450   58840 cri.go:89] found id: ""
	I0520 11:51:16.148459   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:16.148527   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.153400   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:16.153456   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:16.196145   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:16.196167   58840 cri.go:89] found id: ""
	I0520 11:51:16.196175   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:16.196231   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.200792   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:16.200857   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:16.244411   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:16.244435   58840 cri.go:89] found id: ""
	I0520 11:51:16.244446   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:16.244497   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.249637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:16.249707   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:16.288520   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.288542   58840 cri.go:89] found id: ""
	I0520 11:51:16.288550   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:16.288613   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.293307   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:16.293377   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:16.336963   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.336988   58840 cri.go:89] found id: ""
	I0520 11:51:16.336998   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:16.337053   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.342155   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:16.342236   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:16.390589   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.390610   58840 cri.go:89] found id: ""
	I0520 11:51:16.390617   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:16.390672   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.395319   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:16.395367   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:16.444947   58840 cri.go:89] found id: ""
	I0520 11:51:16.444974   58840 logs.go:276] 0 containers: []
	W0520 11:51:16.444985   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:16.445005   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:16.445069   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:16.485389   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.485414   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.485418   58840 cri.go:89] found id: ""
	I0520 11:51:16.485425   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:16.485481   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.490036   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.494650   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:16.494678   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:16.510820   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:16.510847   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.549583   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:16.549616   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.593009   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:16.593036   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.644523   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:16.644551   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.686326   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:16.686355   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.753308   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:16.753339   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.799568   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:16.799600   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:16.858422   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:16.858454   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:18.385869   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:20.386467   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:16.983556   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:16.983591   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:17.034851   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:17.034885   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:17.078220   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:17.078247   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:17.120946   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:17.120979   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:20.055408   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:51:20.059619   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:51:20.060649   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:51:20.060671   58840 api_server.go:131] duration metric: took 3.954509172s to wait for apiserver health ...
	I0520 11:51:20.060679   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:51:20.060698   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:20.060745   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:20.099666   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.099687   58840 cri.go:89] found id: ""
	I0520 11:51:20.099695   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:20.099746   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.104235   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:20.104286   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:20.162860   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.162881   58840 cri.go:89] found id: ""
	I0520 11:51:20.162887   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:20.162936   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.168215   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:20.168267   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:20.208871   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.208900   58840 cri.go:89] found id: ""
	I0520 11:51:20.208909   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:20.208982   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.213637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:20.213717   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:20.253194   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.253219   58840 cri.go:89] found id: ""
	I0520 11:51:20.253230   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:20.253286   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.258190   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:20.258261   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:20.299693   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.299713   58840 cri.go:89] found id: ""
	I0520 11:51:20.299720   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:20.299779   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.308104   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:20.308165   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:20.348954   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:20.348977   58840 cri.go:89] found id: ""
	I0520 11:51:20.348985   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:20.349030   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.353738   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:20.353798   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:20.388466   58840 cri.go:89] found id: ""
	I0520 11:51:20.388486   58840 logs.go:276] 0 containers: []
	W0520 11:51:20.388496   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:20.388504   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:20.388564   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:20.430628   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.430653   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.430659   58840 cri.go:89] found id: ""
	I0520 11:51:20.430668   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:20.430714   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.435241   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.439312   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:20.439332   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.476361   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:20.476388   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.514317   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:20.514345   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:20.566557   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:20.566604   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:20.582249   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:20.582279   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.639530   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:20.639568   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.676804   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:20.676837   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.713661   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:20.713691   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.757100   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:20.757125   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:20.805824   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:20.805859   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:20.921508   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:20.921540   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.968752   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:20.968785   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:21.024773   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:21.024802   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:23.915504   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:51:23.915536   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.915540   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.915544   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.915549   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.915552   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.915555   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.915562   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.915566   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.915574   58840 system_pods.go:74] duration metric: took 3.854889521s to wait for pod list to return data ...
	I0520 11:51:23.915582   58840 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:23.917651   58840 default_sa.go:45] found service account: "default"
	I0520 11:51:23.917672   58840 default_sa.go:55] duration metric: took 2.084872ms for default service account to be created ...
	I0520 11:51:23.917679   58840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:23.922566   58840 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:23.922589   58840 system_pods.go:89] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.922594   58840 system_pods.go:89] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.922599   58840 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.922604   58840 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.922608   58840 system_pods.go:89] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.922612   58840 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.922618   58840 system_pods.go:89] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.922623   58840 system_pods.go:89] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.922630   58840 system_pods.go:126] duration metric: took 4.946206ms to wait for k8s-apps to be running ...
	I0520 11:51:23.922638   58840 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:23.922679   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:23.941348   58840 system_svc.go:56] duration metric: took 18.700356ms WaitForService to wait for kubelet
	I0520 11:51:23.941388   58840 kubeadm.go:576] duration metric: took 4m24.479184818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:23.941414   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:23.945372   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:23.945400   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:23.945414   58840 node_conditions.go:105] duration metric: took 3.993897ms to run NodePressure ...
	I0520 11:51:23.945428   58840 start.go:240] waiting for startup goroutines ...
	I0520 11:51:23.945437   58840 start.go:245] waiting for cluster config update ...
	I0520 11:51:23.945450   58840 start.go:254] writing updated cluster config ...
	I0520 11:51:23.945739   58840 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:23.997255   58840 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:23.999485   58840 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-668445" cluster and "default" namespace by default
	I0520 11:51:22.386549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:24.885200   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:26.887178   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:29.384900   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:31.386214   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:33.386309   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:35.379767   57519 pod_ready.go:81] duration metric: took 4m0.000946428s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:35.379806   57519 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0520 11:51:35.379826   57519 pod_ready.go:38] duration metric: took 4m4.525892516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:35.379854   57519 kubeadm.go:591] duration metric: took 4m13.966188007s to restartPrimaryControlPlane
	W0520 11:51:35.379905   57519 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:51:35.379929   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:07.074203   57519 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.694253213s)
	I0520 11:52:07.074270   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:07.089871   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:07.100708   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:07.111260   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:07.111283   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:07.111333   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:07.121864   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:07.121912   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:07.131426   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:07.140637   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:07.140693   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:07.150305   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.160398   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:07.160464   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.170170   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:07.179684   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:07.179740   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:07.189036   57519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:07.242553   57519 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:07.242606   57519 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:07.396088   57519 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:07.396232   57519 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:07.396391   57519 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:07.612882   57519 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:07.614941   57519 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:07.615033   57519 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:07.615124   57519 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:07.615249   57519 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:07.615353   57519 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:07.615465   57519 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:07.615543   57519 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:07.615641   57519 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:07.615726   57519 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:07.615815   57519 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:07.615915   57519 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:07.615970   57519 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:07.616047   57519 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:07.744190   57519 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:08.012140   57519 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:08.080751   57519 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:08.369179   57519 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:08.507095   57519 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:08.507736   57519 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:08.510271   57519 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
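	[editor's note] For the v1.20.0 control plane above, kubeadm gave up because the kubelet's health endpoint on 127.0.0.1:10248 never answered; the run continues below with a kubeadm reset and a retry. The on-node checks kubeadm itself suggests, collected into one sketch (run inside the node, e.g. via minikube ssh -p <profile>; the profile name is not printed in this part of the log):
	$ sudo systemctl status kubelet
	$ sudo journalctl -xeu kubelet | tail -n 100
	$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause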
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:08.512322   57519 out.go:204]   - Booting up control plane ...
	I0520 11:52:08.512437   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:08.512530   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:08.512609   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:08.533828   57519 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:08.534843   57519 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:08.534903   57519 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:08.670084   57519 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:08.670192   57519 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:09.172058   57519 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.003696ms
	I0520 11:52:09.172181   57519 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:14.174127   57519 kubeadm.go:309] [api-check] The API server is healthy after 5.002000599s
	I0520 11:52:14.189265   57519 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:52:14.707457   57519 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:52:14.731273   57519 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:52:14.731480   57519 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:52:14.746881   57519 kubeadm.go:309] [bootstrap-token] Using token: 3eyb61.mp2wh3ubkzpzouqm
	I0520 11:52:14.748381   57519 out.go:204]   - Configuring RBAC rules ...
	I0520 11:52:14.748539   57519 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:52:14.764426   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:52:14.773450   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:52:14.778770   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:52:14.782819   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:52:14.787255   57519 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:52:14.900560   57519 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:52:15.342284   57519 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:52:15.901553   57519 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:52:15.904310   57519 kubeadm.go:309] 
	I0520 11:52:15.904380   57519 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:52:15.904389   57519 kubeadm.go:309] 
	I0520 11:52:15.904507   57519 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:52:15.904545   57519 kubeadm.go:309] 
	I0520 11:52:15.904578   57519 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:52:15.904653   57519 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:52:15.904713   57519 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:52:15.904723   57519 kubeadm.go:309] 
	I0520 11:52:15.904813   57519 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:52:15.904824   57519 kubeadm.go:309] 
	I0520 11:52:15.904910   57519 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:52:15.904935   57519 kubeadm.go:309] 
	I0520 11:52:15.905012   57519 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:52:15.905113   57519 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:52:15.905218   57519 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:52:15.905239   57519 kubeadm.go:309] 
	I0520 11:52:15.905326   57519 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:52:15.905433   57519 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:52:15.905443   57519 kubeadm.go:309] 
	I0520 11:52:15.905534   57519 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.905675   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:52:15.905725   57519 kubeadm.go:309] 	--control-plane 
	I0520 11:52:15.905738   57519 kubeadm.go:309] 
	I0520 11:52:15.905851   57519 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:52:15.905862   57519 kubeadm.go:309] 
	I0520 11:52:15.905989   57519 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.906131   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:52:15.907657   57519 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:15.907690   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:52:15.907703   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:15.909664   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:52:15.911132   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:52:15.927634   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:52:15.946135   57519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:52:15.946282   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:15.946298   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:52:16.161431   57519 ops.go:34] apiserver oom_adj: -16
	I0520 11:52:16.166400   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:16.666687   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.167351   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.666800   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.167094   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.667154   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.167261   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.666780   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.167002   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.667078   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.167229   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.666511   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.167275   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.666512   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.166471   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.667393   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.167417   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.667387   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.167046   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.666728   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.167118   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.667450   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.166833   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.666776   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.167244   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.666741   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.758525   57519 kubeadm.go:1107] duration metric: took 12.81228323s to wait for elevateKubeSystemPrivileges
	W0520 11:52:28.758571   57519 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:52:28.758582   57519 kubeadm.go:393] duration metric: took 5m7.403869165s to StartCluster
	I0520 11:52:28.758599   57519 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.758679   57519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:52:28.760488   57519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.760723   57519 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:28.762410   57519 out.go:177] * Verifying Kubernetes components...
	I0520 11:52:28.760783   57519 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:52:28.760966   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:28.763493   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:28.763499   57519 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:52:28.763527   57519 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:52:28.763528   57519 addons.go:69] Setting metrics-server=true in profile "no-preload-814599"
	W0520 11:52:28.763537   57519 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:52:28.763527   57519 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:52:28.763558   57519 addons.go:234] Setting addon metrics-server=true in "no-preload-814599"
	I0520 11:52:28.763562   57519 host.go:66] Checking if "no-preload-814599" exists ...
	W0520 11:52:28.763568   57519 addons.go:243] addon metrics-server should already be in state true
	I0520 11:52:28.763582   57519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:52:28.763589   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.763934   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763952   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763984   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764022   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.764047   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764081   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.778949   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 11:52:28.778995   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0520 11:52:28.779481   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.779522   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.780023   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780053   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780159   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780180   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780498   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780532   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780680   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.780720   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0520 11:52:28.781084   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.781094   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.781127   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.781605   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.781627   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.781951   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.782608   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.782651   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.784981   57519 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	W0520 11:52:28.785010   57519 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:52:28.785041   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.785399   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.785423   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.797463   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0520 11:52:28.797488   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0520 11:52:28.797958   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798050   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798571   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798589   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.798634   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798647   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.799040   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799043   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799206   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.799250   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.802002   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.804188   57519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:52:28.802940   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.805408   57519 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:52:28.804469   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0520 11:52:28.805363   57519 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:28.806586   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:52:28.806592   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:52:28.806601   57519 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:52:28.806607   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806613   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806965   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.807364   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.807375   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.807765   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.808210   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.808226   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.810837   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.810866   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811278   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811300   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811477   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.811716   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.811753   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811769   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811856   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812027   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.812308   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.812494   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.812643   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812795   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.824082   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0520 11:52:28.824580   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.825117   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.825144   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.825492   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.825709   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.827343   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.827545   57519 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:28.827561   57519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:52:28.827574   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.830623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831261   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.831287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.831625   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.831783   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.831912   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.984120   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:52:29.006910   57519 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016171   57519 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:52:29.016197   57519 node_ready.go:38] duration metric: took 9.258398ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016207   57519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:29.024085   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:29.092559   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:29.094377   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:52:29.094395   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:52:29.112741   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:29.133783   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:52:29.133804   57519 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:52:29.202727   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:29.202756   57519 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:52:29.347632   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:30.143576   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050979242s)
	I0520 11:52:30.143633   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143645   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143652   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030877915s)
	I0520 11:52:30.143694   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143706   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143986   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.143960   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.144014   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144026   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144037   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144040   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144053   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144061   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144068   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144045   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144326   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144345   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.146044   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.146057   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.146068   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.182464   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.182492   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.182802   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.182817   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.557632   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209948791s)
	I0520 11:52:30.557698   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.557715   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.557972   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.557992   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.557996   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558011   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.558022   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.558252   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.558267   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558277   57519 addons.go:470] Verifying addon metrics-server=true in "no-preload-814599"
	I0520 11:52:30.558302   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.560471   57519 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0520 11:52:30.561889   57519 addons.go:505] duration metric: took 1.801104362s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0520 11:52:31.030297   57519 pod_ready.go:102] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"False"
	I0520 11:52:31.538984   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.539002   57519 pod_ready.go:81] duration metric: took 2.514888376s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.539011   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544069   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.544089   57519 pod_ready.go:81] duration metric: took 5.072004ms for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544097   57519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549299   57519 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.549315   57519 pod_ready.go:81] duration metric: took 5.212738ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549323   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555210   57519 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.555229   57519 pod_ready.go:81] duration metric: took 5.899536ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555263   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560438   57519 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.560454   57519 pod_ready.go:81] duration metric: took 5.183454ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560463   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928261   57519 pod_ready.go:92] pod "kube-proxy-wkfkc" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.928280   57519 pod_ready.go:81] duration metric: took 367.811223ms for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928290   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328564   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:32.328589   57519 pod_ready.go:81] duration metric: took 400.292676ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328598   57519 pod_ready.go:38] duration metric: took 3.31238177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:32.328612   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:52:32.328672   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:52:32.346546   57519 api_server.go:72] duration metric: took 3.585793393s to wait for apiserver process to appear ...
	I0520 11:52:32.346575   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:52:32.346598   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:52:32.351109   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:52:32.352044   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:52:32.352070   57519 api_server.go:131] duration metric: took 5.485989ms to wait for apiserver health ...
	I0520 11:52:32.352080   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:52:32.532017   57519 system_pods.go:59] 9 kube-system pods found
	I0520 11:52:32.532048   57519 system_pods.go:61] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.532053   57519 system_pods.go:61] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.532056   57519 system_pods.go:61] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.532060   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.532064   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.532067   57519 system_pods.go:61] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.532071   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.532077   57519 system_pods.go:61] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.532081   57519 system_pods.go:61] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.532094   57519 system_pods.go:74] duration metric: took 180.008393ms to wait for pod list to return data ...
	I0520 11:52:32.532103   57519 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:52:32.729233   57519 default_sa.go:45] found service account: "default"
	I0520 11:52:32.729264   57519 default_sa.go:55] duration metric: took 197.154237ms for default service account to be created ...
	I0520 11:52:32.729278   57519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:52:32.932686   57519 system_pods.go:86] 9 kube-system pods found
	I0520 11:52:32.932717   57519 system_pods.go:89] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.932764   57519 system_pods.go:89] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.932777   57519 system_pods.go:89] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.932786   57519 system_pods.go:89] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.932796   57519 system_pods.go:89] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.932805   57519 system_pods.go:89] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.932812   57519 system_pods.go:89] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.932824   57519 system_pods.go:89] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.932835   57519 system_pods.go:89] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.932854   57519 system_pods.go:126] duration metric: took 203.569018ms to wait for k8s-apps to be running ...
	I0520 11:52:32.932867   57519 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:52:32.932945   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:32.948983   57519 system_svc.go:56] duration metric: took 16.10841ms WaitForService to wait for kubelet
	I0520 11:52:32.949014   57519 kubeadm.go:576] duration metric: took 4.188266242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:32.949035   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:52:33.128478   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:52:33.128503   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:52:33.128514   57519 node_conditions.go:105] duration metric: took 179.473758ms to run NodePressure ...
	I0520 11:52:33.128527   57519 start.go:240] waiting for startup goroutines ...
	I0520 11:52:33.128536   57519 start.go:245] waiting for cluster config update ...
	I0520 11:52:33.128548   57519 start.go:254] writing updated cluster config ...
	I0520 11:52:33.128860   57519 ssh_runner.go:195] Run: rm -f paused
	I0520 11:52:33.177360   57519 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:52:33.179409   57519 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 
	
	
	==> CRI-O <==
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.781741698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206049781721795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7d7d6f8-3247-42f0-b6e0-7f52f9c76f5b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.782457308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dff0c64f-54d9-4097-912c-f03ac8218bef name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.782521164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dff0c64f-54d9-4097-912c-f03ac8218bef name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.782559186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dff0c64f-54d9-4097-912c-f03ac8218bef name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.816462738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=edc3e194-d9b8-4342-8af7-0f289a590bf0 name=/runtime.v1.RuntimeService/Version
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.816547868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=edc3e194-d9b8-4342-8af7-0f289a590bf0 name=/runtime.v1.RuntimeService/Version
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.817555784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63c5494e-f404-4035-9296-39d3096f808e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.817953827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206049817934496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63c5494e-f404-4035-9296-39d3096f808e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.818439708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0abbc565-70a6-4068-b238-3faedfb4e91c name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.818493891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0abbc565-70a6-4068-b238-3faedfb4e91c name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.818525698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0abbc565-70a6-4068-b238-3faedfb4e91c name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.850495910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76352481-2ad2-44b8-a31e-a936421beb85 name=/runtime.v1.RuntimeService/Version
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.850616593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76352481-2ad2-44b8-a31e-a936421beb85 name=/runtime.v1.RuntimeService/Version
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.853684078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0f782d4-886e-481a-844b-0ff73385e299 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.854050226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206049854024523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0f782d4-886e-481a-844b-0ff73385e299 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.854556679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03dfdf1e-f5e5-4f88-8f3d-896bbfb49653 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.854604478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03dfdf1e-f5e5-4f88-8f3d-896bbfb49653 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.854632908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=03dfdf1e-f5e5-4f88-8f3d-896bbfb49653 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.887542916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09d3b815-9f1b-4fb9-ad5f-a6ab0767d64a name=/runtime.v1.RuntimeService/Version
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.887621076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09d3b815-9f1b-4fb9-ad5f-a6ab0767d64a name=/runtime.v1.RuntimeService/Version
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.888782713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d84a2524-625a-4fec-880c-1a77ead4bba2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.889292156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206049889259094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d84a2524-625a-4fec-880c-1a77ead4bba2 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.889904756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec6888a3-0e53-449c-ba36-6218f1dc60d5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.889993760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec6888a3-0e53-449c-ba36-6218f1dc60d5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 11:54:09 old-k8s-version-210420 crio[654]: time="2024-05-20 11:54:09.890034828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ec6888a3-0e53-449c-ba36-6218f1dc60d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May20 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041197] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.512371] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.398190] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611528] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 11:46] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.055467] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069914] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.180624] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.154603] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.255744] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.419712] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.817677] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[ +12.205049] kauditd_printk_skb: 46 callbacks suppressed
	[May20 11:50] systemd-fstab-generator[5032]: Ignoring "noauto" option for root device
	[May20 11:52] systemd-fstab-generator[5321]: Ignoring "noauto" option for root device
	[  +0.070587] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:54:10 up 8 min,  0 users,  load average: 0.23, 0.19, 0.09
	Linux old-k8s-version-210420 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/lookup.go:299 +0x685
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000bade00, 0x48ab5d6, 0x3, 0xc000b47ec0, 0x24, 0x0, 0x0, 0x0, ...)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bade00, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b47ec0, 0x24, 0x0, ...)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/dial.go:221 +0x47d
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: net.(*Dialer).DialContext(0xc00027e1e0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b47ec0, 0x24, 0x0, 0x0, 0x0, ...)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/dial.go:403 +0x22b
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008d4280, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b47ec0, 0x24, 0x60, 0x7ff521581a70, 0x118, ...)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: net/http.(*Transport).dial(0xc0009d8000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000b47ec0, 0x24, 0x0, 0x0, 0x0, ...)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: net/http.(*Transport).dialConn(0xc0009d8000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000ba5200, 0x5, 0xc000b47ec0, 0x24, 0x0, 0xc000baaa20, ...)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: net/http.(*Transport).dialConnFor(0xc0009d8000, 0xc000b3a580)
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]: created by net/http.(*Transport).queueForDial
	May 20 11:54:06 old-k8s-version-210420 kubelet[5504]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	May 20 11:54:07 old-k8s-version-210420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	May 20 11:54:07 old-k8s-version-210420 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 20 11:54:07 old-k8s-version-210420 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 20 11:54:07 old-k8s-version-210420 kubelet[5545]: I0520 11:54:07.719803    5545 server.go:416] Version: v1.20.0
	May 20 11:54:07 old-k8s-version-210420 kubelet[5545]: I0520 11:54:07.720094    5545 server.go:837] Client rotation is on, will bootstrap in background
	May 20 11:54:07 old-k8s-version-210420 kubelet[5545]: I0520 11:54:07.722216    5545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 20 11:54:07 old-k8s-version-210420 kubelet[5545]: I0520 11:54:07.723946    5545 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	May 20 11:54:07 old-k8s-version-210420 kubelet[5545]: W0520 11:54:07.724103    5545 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (235.301766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-210420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (699.81s)
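
For context on the failure above: the captured start log ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A hedged, illustrative follow-up for this profile, assembled only from flags already visible in the log and not part of the recorded test run, might look like:

	# Inspect why the kubelet keeps crash-looping on the node (restart counter was at 20 in the kubelet log above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-210420 -- sudo journalctl -xeu kubelet | tail -n 50
	# Retry the start with the cgroup-driver override suggested by minikube
	out/minikube-linux-amd64 start -p old-k8s-version-210420 \
		--kubernetes-version=v1.20.0 --container-runtime=crio \
		--extra-config=kubelet.cgroup-driver=systemd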

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445: exit status 3 (3.167717787s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:43:52.685306   58714 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host
	E0520 11:43:52.685327   58714 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-668445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-668445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153428701s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-668445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445: exit status 3 (3.062307774s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:44:01.901293   58794 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host
	E0520 11:44:01.901316   58794 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-668445" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
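
For context on the failure above: every probe of this profile fails with "dial tcp 192.168.61.221:22: connect: no route to host", i.e. the VM became unreachable over SSH after the stop, so the addon enable cannot even list paused containers. The suggestion box in the output asks for 'minikube logs --file=logs.txt'; a hypothetical collection pass for this profile (illustrative commands, not part of the recorded run, and assuming the kvm2 driver named the libvirt domain after the profile) could be:

	# Capture whatever logs minikube can still gather for the unreachable profile
	out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-668445
	# Check the VM state directly through libvirt, bypassing SSH
	sudo virsh domstate default-k8s-diff-port-668445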

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274976 -n embed-certs-274976
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-20 12:00:00.60667798 +0000 UTC m=+5970.318560813
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-274976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-274976 logs -n 25: (2.13443899s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                           | kubernetes-upgrade-854547    | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
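	For readability: the final start row above corresponds, once its wrapped cells are joined, to roughly the following single invocation (reconstructed from the table cells; the binary path follows the convention used elsewhere in this report):
	
	  out/minikube-linux-amd64 start -p default-k8s-diff-port-668445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.1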
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:44:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
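	(Reader aid, not part of the captured log: given the klog-style line format documented above, a minimal Go sketch for splitting such lines into fields could look like the following. The regular expression and field names are illustrative assumptions, not code that minikube ships.)
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Matches the documented format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		// Example line taken verbatim from the log below.
		line := "I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false"
		if m := klogLine.FindStringSubmatch(line); m != nil {
			// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file:line, m[6]=message
			fmt.Printf("severity=%s date=%s time=%s src=%s msg=%q\n", m[1], m[2], m[3], m[5], m[6])
		}
	}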
	I0520 11:44:01.940145   58840 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:44:01.940398   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940409   58840 out.go:304] Setting ErrFile to fd 2...
	I0520 11:44:01.940415   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940633   58840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false
	I0520 11:44:01.942056   58840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5185,"bootTime":1716200257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:44:01.942118   58840 start.go:139] virtualization: kvm guest
	I0520 11:44:01.944331   58840 out.go:177] * [default-k8s-diff-port-668445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:44:01.945948   58840 notify.go:220] Checking for updates...
	I0520 11:44:01.945956   58840 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:44:01.947257   58840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:44:01.948525   58840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:44:01.949881   58840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:44:01.951156   58840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:44:01.952736   58840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:44:01.954565   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:44:01.954932   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.954979   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.969623   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0520 11:44:01.969984   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.970444   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.970489   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.970816   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.971013   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:01.971212   58840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:44:01.971493   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.971540   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.985556   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0520 11:44:01.985885   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.986365   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.986388   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.986707   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.986883   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:02.021778   58840 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:44:02.022944   58840 start.go:297] selected driver: kvm2
	I0520 11:44:02.022954   58840 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.023042   58840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:44:02.023717   58840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.023796   58840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:44:02.038516   58840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:44:02.038900   58840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:44:02.038931   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:44:02.038942   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:44:02.038989   58840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.039097   58840 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.041645   58840 out.go:177] * Starting "default-k8s-diff-port-668445" primary control-plane node in "default-k8s-diff-port-668445" cluster
	I0520 11:44:02.042785   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:44:02.042826   58840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:44:02.042839   58840 cache.go:56] Caching tarball of preloaded images
	I0520 11:44:02.042917   58840 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:44:02.042929   58840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:44:02.043041   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:44:02.043267   58840 start.go:360] acquireMachinesLock for default-k8s-diff-port-668445: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:44:06.605177   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:09.677144   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:15.757188   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:18.829221   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:24.909165   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:27.981180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:34.061180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:37.133203   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:43.213225   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:46.285226   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:52.365216   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:55.437121   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:01.517184   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:04.589206   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:10.669220   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:13.741264   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:19.821209   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:22.893127   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:28.973155   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:32.045217   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:38.125195   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.127376   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:41.127411   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127706   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:45:41.127739   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127954   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:45:41.129638   57519 machine.go:97] duration metric: took 4m44.663213682s to provisionDockerMachine
	I0520 11:45:41.129681   57519 fix.go:56] duration metric: took 4m44.685919857s for fixHost
	I0520 11:45:41.129692   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 4m44.68594997s
	W0520 11:45:41.129717   57519 start.go:713] error starting host: provision: host is not running
	W0520 11:45:41.129807   57519 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0520 11:45:41.129817   57519 start.go:728] Will try again in 5 seconds ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:46.134344   57519 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:46:00.745820   58398 start.go:364] duration metric: took 3m22.67445444s to acquireMachinesLock for "embed-certs-274976"
	I0520 11:46:00.745877   58398 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:00.745887   58398 fix.go:54] fixHost starting: 
	I0520 11:46:00.746287   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:00.746320   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:00.762678   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0520 11:46:00.763132   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:00.763654   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:00.763681   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:00.763991   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:00.764154   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:00.764308   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:00.765826   58398 fix.go:112] recreateIfNeeded on embed-certs-274976: state=Stopped err=<nil>
	I0520 11:46:00.765861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	W0520 11:46:00.766017   58398 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:00.768362   58398 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274976" ...
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
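
The sysctl probe above fails because br_netfilter is not loaded yet, so the next steps load the module and turn on IPv4 forwarding. A minimal local sketch of that fallback in Go, assuming the commands are run directly with sudo rather than through minikube's ssh_runner.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v (output: %s)", name, args, err, out)
        }
        return nil
    }

    func main() {
        // Probe the bridge netfilter sysctl first; it only exists once br_netfilter is loaded.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        // Enable IPv4 forwarding regardless, mirroring the log above.
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }
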
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:00.769810   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Start
	I0520 11:46:00.769988   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring networks are active...
	I0520 11:46:00.770615   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network default is active
	I0520 11:46:00.770928   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network mk-embed-certs-274976 is active
	I0520 11:46:00.771273   58398 main.go:141] libmachine: (embed-certs-274976) Getting domain xml...
	I0520 11:46:00.771977   58398 main.go:141] libmachine: (embed-certs-274976) Creating domain...
	I0520 11:46:02.023816   58398 main.go:141] libmachine: (embed-certs-274976) Waiting to get IP...
	I0520 11:46:02.024616   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.025031   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.025111   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.025009   59335 retry.go:31] will retry after 218.274206ms: waiting for machine to come up
	I0520 11:46:02.244661   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.245135   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.245163   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.245110   59335 retry.go:31] will retry after 359.160276ms: waiting for machine to come up
	I0520 11:46:02.606589   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.607106   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.607129   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.607044   59335 retry.go:31] will retry after 462.479953ms: waiting for machine to come up
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:03.071686   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.072242   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.072271   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.072198   59335 retry.go:31] will retry after 531.663033ms: waiting for machine to come up
	I0520 11:46:03.606121   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.606629   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.606653   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.606581   59335 retry.go:31] will retry after 690.866851ms: waiting for machine to come up
	I0520 11:46:04.299326   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:04.299799   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:04.299829   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:04.299750   59335 retry.go:31] will retry after 928.456543ms: waiting for machine to come up
	I0520 11:46:05.230115   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.230568   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.230586   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.230539   59335 retry.go:31] will retry after 761.597702ms: waiting for machine to come up
	I0520 11:46:05.993511   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.993897   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.993930   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.993854   59335 retry.go:31] will retry after 1.405329972s: waiting for machine to come up
	I0520 11:46:07.401706   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:07.402246   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:07.402312   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:07.402223   59335 retry.go:31] will retry after 1.717177668s: waiting for machine to come up
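
Each retry.go:31 line above waits a growing, jittered interval before querying the domain's DHCP lease again. A minimal sketch of that wait-for-IP loop; lookupIP is a hypothetical stand-in for the libvirt lease lookup, and the backoff constants are illustrative, not minikube's.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
    // for the domain's MAC address; it fails until the guest has booted.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.50.167", nil
    }

    func main() {
        wait := 200 * time.Millisecond
        for attempt := 0; attempt < 20; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Grow the delay and add jitter so repeated probes do not hammer libvirt.
            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            wait = wait * 3 / 2
        }
        fmt.Println("gave up waiting for an IP address")
    }
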
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
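
With no preloaded images in CRI-O's store, the preload tarball is copied to the guest and unpacked into /var with lz4, then deleted. A minimal sketch of the extraction step only, assuming the tarball already sits at /preloaded.tar.lz4 and that tar and lz4 are installed.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("tarball missing, it would have to be copied over first:", err)
            return
        }
        start := time.Now()
        // Mirrors the command in the log: keep xattrs (image layers rely on
        // security.capability) and decompress with lz4 while extracting into /var.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v (%s)\n", err, out)
            return
        }
        fmt.Printf("took %s to extract the tarball\n", time.Since(start))
        _ = os.Remove(tarball) // best-effort cleanup, as in the log
    }
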
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
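
cache_images decides per image whether the runtime already holds it by inspecting the image ID and comparing it with the expected hash; anything missing is removed from the runtime and reloaded from the on-disk cache (which fails here because the cached kube-proxy file is absent). A minimal sketch of that needs-transfer decision; inspectImageID is a hypothetical stand-in for podman image inspect --format {{.Id}}.

    package main

    import "fmt"

    // inspectImageID is a hypothetical stand-in for
    // `sudo podman image inspect --format {{.Id}} <image>`.
    func inspectImageID(image string) (string, bool) {
        present := map[string]string{} // empty: nothing is preloaded in this sketch
        id, ok := present[image]
        return id, ok
    }

    func needsTransfer(image, wantID string) bool {
        id, ok := inspectImageID(image)
        return !ok || id != wantID
    }

    func main() {
        images := map[string]string{
            "registry.k8s.io/kube-proxy:v1.20.0": "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc",
            "registry.k8s.io/pause:3.2":          "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
        }
        for image, want := range images {
            if needsTransfer(image, want) {
                fmt.Printf("%q needs transfer: would rmi it and load it from the local cache\n", image)
            }
        }
    }
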
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
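
Each certificate copied under /usr/share/ca-certificates is made visible to OpenSSL by linking it into /etc/ssl/certs under its subject hash; the openssl x509 -hash calls above compute that hash (e.g. b5213941 for minikubeCA in this run). A minimal local sketch of that step, assuming openssl is on PATH and the process may write to /etc/ssl/certs.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA in this run
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: remove any stale link first, then point it at the cert.
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Printf("linked %s -> %s\n", link, cert)
    }
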
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
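
The -checkend 86400 calls above confirm that none of the control-plane certificates expire within the next 24 hours. The same check can be done in pure Go with crypto/x509; the sketch below assumes a PEM-encoded certificate and uses one of the paths from the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Equivalent to `openssl x509 -checkend 86400`: does the cert expire
        // before now + window?
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        if soon {
            fmt.Println("certificate expires within 24h; it would need to be regenerated")
        } else {
            fmt.Println("certificate is valid for at least another 24h")
        }
    }
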
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
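
api_server.go polls for the kube-apiserver process roughly every half second until it appears or a timeout elapses, which is what the repeated pgrep lines show. A minimal sketch of that poll loop; apiServerPID is a hypothetical stand-in for the pgrep call run over SSH, and the 60-second timeout is an assumption.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // apiServerPID is a hypothetical stand-in for
    // `sudo pgrep -xnf kube-apiserver.*minikube.*` run over SSH.
    func apiServerPID(elapsed time.Duration) (int, error) {
        if elapsed < 3*time.Second {
            return 0, errors.New("process not found")
        }
        return 4242, nil
    }

    func main() {
        const timeout = 60 * time.Second // the real wait is longer; this keeps the sketch quick
        start := time.Now()
        for time.Since(start) < timeout {
            if pid, err := apiServerPID(time.Since(start)); err == nil {
                fmt.Printf("apiserver process appeared after %v (pid %d)\n", time.Since(start).Round(time.Millisecond), pid)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the pgrep lines above
        }
        fmt.Println("timed out waiting for the apiserver process")
    }
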
	I0520 11:46:09.121909   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:09.122323   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:09.122358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:09.122261   59335 retry.go:31] will retry after 1.49678652s: waiting for machine to come up
	I0520 11:46:10.620875   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:10.621226   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:10.621255   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:10.621184   59335 retry.go:31] will retry after 2.772139112s: waiting for machine to come up
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.396309   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:13.396705   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:13.396725   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:13.396678   59335 retry.go:31] will retry after 2.987909039s: waiting for machine to come up
	I0520 11:46:16.386007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:16.386500   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:16.386527   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:16.386460   59335 retry.go:31] will retry after 4.351760111s: waiting for machine to come up
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.138009   58840 start.go:364] duration metric: took 2m20.094710429s to acquireMachinesLock for "default-k8s-diff-port-668445"
	I0520 11:46:22.138085   58840 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:22.138096   58840 fix.go:54] fixHost starting: 
	I0520 11:46:22.138535   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:22.138577   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:22.154345   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0520 11:46:22.154701   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:22.155172   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:22.155198   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:22.155492   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:22.155667   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:22.155834   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:22.157486   58840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-668445: state=Stopped err=<nil>
	I0520 11:46:22.157525   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	W0520 11:46:22.157703   58840 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:22.159724   58840 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-668445" ...
	I0520 11:46:20.742954   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743480   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has current primary IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743503   58398 main.go:141] libmachine: (embed-certs-274976) Found IP for machine: 192.168.50.167
	I0520 11:46:20.743517   58398 main.go:141] libmachine: (embed-certs-274976) Reserving static IP address...
	I0520 11:46:20.743907   58398 main.go:141] libmachine: (embed-certs-274976) Reserved static IP address: 192.168.50.167
	I0520 11:46:20.743930   58398 main.go:141] libmachine: (embed-certs-274976) Waiting for SSH to be available...
	I0520 11:46:20.743955   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.744000   58398 main.go:141] libmachine: (embed-certs-274976) DBG | skip adding static IP to network mk-embed-certs-274976 - found existing host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"}
	I0520 11:46:20.744017   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Getting to WaitForSSH function...
	I0520 11:46:20.746685   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747071   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.747112   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747244   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH client type: external
	I0520 11:46:20.747259   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa (-rw-------)
	I0520 11:46:20.747278   58398 main.go:141] libmachine: (embed-certs-274976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:20.747287   58398 main.go:141] libmachine: (embed-certs-274976) DBG | About to run SSH command:
	I0520 11:46:20.747295   58398 main.go:141] libmachine: (embed-certs-274976) DBG | exit 0
	I0520 11:46:20.872997   58398 main.go:141] libmachine: (embed-certs-274976) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:20.873377   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetConfigRaw
	I0520 11:46:20.874039   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:20.876476   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.876827   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.876859   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.877169   58398 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:46:20.877407   58398 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:20.877432   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:20.877643   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.879744   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880089   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.880113   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880328   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.880529   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880673   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880840   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.881034   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.881263   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.881275   58398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:20.993535   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:20.993572   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.993813   58398 buildroot.go:166] provisioning hostname "embed-certs-274976"
	I0520 11:46:20.993841   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.994034   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.996567   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.996922   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.996970   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.997092   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.997280   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997437   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997580   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.997734   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.997993   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.998013   58398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274976 && echo "embed-certs-274976" | sudo tee /etc/hostname
	I0520 11:46:21.125482   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274976
	
	I0520 11:46:21.125512   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.128347   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128717   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.128746   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128916   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.129120   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129296   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129436   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.129613   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.129774   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.129789   58398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274976/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:21.250287   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:21.250320   58398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:21.250371   58398 buildroot.go:174] setting up certificates
	I0520 11:46:21.250385   58398 provision.go:84] configureAuth start
	I0520 11:46:21.250403   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:21.250726   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:21.253610   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254038   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.254053   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254303   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.256633   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.257046   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257206   58398 provision.go:143] copyHostCerts
	I0520 11:46:21.257266   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:21.257289   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:21.257369   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:21.257489   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:21.257499   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:21.257537   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:21.257617   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:21.257626   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:21.257661   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:21.257729   58398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274976 san=[127.0.0.1 192.168.50.167 embed-certs-274976 localhost minikube]
	I0520 11:46:21.458096   58398 provision.go:177] copyRemoteCerts
	I0520 11:46:21.458152   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:21.458178   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.460662   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461006   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.461042   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461205   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.461370   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.461545   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.461705   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.547703   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:21.574345   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:46:21.598843   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:21.624064   58398 provision.go:87] duration metric: took 373.664517ms to configureAuth
	I0520 11:46:21.624091   58398 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:21.624262   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:21.624330   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.626996   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627378   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.627406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627556   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.627711   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.627902   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.628077   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.628231   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.628402   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.628420   58398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:21.894896   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:21.894923   58398 machine.go:97] duration metric: took 1.017499149s to provisionDockerMachine
	I0520 11:46:21.894936   58398 start.go:293] postStartSetup for "embed-certs-274976" (driver="kvm2")
	I0520 11:46:21.894948   58398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:21.894966   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:21.895279   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:21.895304   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.897866   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898160   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.898193   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898292   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.898484   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.898663   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.898779   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.984571   58398 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:21.988891   58398 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:21.988918   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:21.988995   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:21.989129   58398 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:21.989257   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:21.999219   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:22.023032   58398 start.go:296] duration metric: took 128.082095ms for postStartSetup
	I0520 11:46:22.023076   58398 fix.go:56] duration metric: took 21.277188857s for fixHost
	I0520 11:46:22.023094   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.025971   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026362   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.026393   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026536   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.026725   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.026867   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.027017   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.027246   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:22.027424   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:22.027437   58398 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:22.137813   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205582.113779638
	
	I0520 11:46:22.137840   58398 fix.go:216] guest clock: 1716205582.113779638
	I0520 11:46:22.137850   58398 fix.go:229] Guest: 2024-05-20 11:46:22.113779638 +0000 UTC Remote: 2024-05-20 11:46:22.023079043 +0000 UTC m=+224.083510648 (delta=90.700595ms)
	I0520 11:46:22.137898   58398 fix.go:200] guest clock delta is within tolerance: 90.700595ms
	I0520 11:46:22.137906   58398 start.go:83] releasing machines lock for "embed-certs-274976", held for 21.39205527s
	I0520 11:46:22.137943   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.138222   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:22.140951   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.141388   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141558   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142024   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142187   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142245   58398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:22.142291   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.142395   58398 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:22.142417   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.144896   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145116   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145386   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145419   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145490   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145518   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145525   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145647   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145703   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145827   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145849   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146019   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146022   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:22.146175   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	W0520 11:46:22.226345   58398 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:22.226421   58398 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:22.248678   58398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:22.404298   58398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:22.410174   58398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:22.410249   58398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:22.426286   58398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:22.426311   58398 start.go:494] detecting cgroup driver to use...
	I0520 11:46:22.426369   58398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:22.441746   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:22.455445   58398 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:22.455515   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:22.469441   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:22.485963   58398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:22.604443   58398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:22.758542   58398 docker.go:233] disabling docker service ...
	I0520 11:46:22.758628   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:22.773371   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:22.786871   58398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:22.940022   58398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:23.076857   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:23.091980   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:23.111380   58398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:23.111434   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.122532   58398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:23.122597   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.134355   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.145484   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.156776   58398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:23.169017   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.180681   58398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.205664   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.217267   58398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:23.228490   58398 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:23.228555   58398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:23.243578   58398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:23.255394   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:23.387407   58398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:23.537543   58398 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:23.537613   58398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:23.542886   58398 start.go:562] Will wait 60s for crictl version
	I0520 11:46:23.542934   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:46:23.546740   58398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:23.587662   58398 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:23.587746   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.615735   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.648727   58398 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.161646   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Start
	I0520 11:46:22.161813   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring networks are active...
	I0520 11:46:22.162609   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network default is active
	I0520 11:46:22.163020   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network mk-default-k8s-diff-port-668445 is active
	I0520 11:46:22.163509   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Getting domain xml...
	I0520 11:46:22.164146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Creating domain...
	I0520 11:46:23.456104   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting to get IP...
	I0520 11:46:23.457157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457703   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457735   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.457652   59511 retry.go:31] will retry after 262.047261ms: waiting for machine to come up
	I0520 11:46:23.721329   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721756   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721815   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.721742   59511 retry.go:31] will retry after 301.868596ms: waiting for machine to come up
	I0520 11:46:24.025477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026042   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026072   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.026006   59511 retry.go:31] will retry after 431.470276ms: waiting for machine to come up
	I0520 11:46:24.458962   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459564   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.459450   59511 retry.go:31] will retry after 566.093902ms: waiting for machine to come up
	I0520 11:46:25.027215   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027787   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.027729   59511 retry.go:31] will retry after 471.2178ms: waiting for machine to come up
	I0520 11:46:25.500575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501067   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501106   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.501010   59511 retry.go:31] will retry after 682.542082ms: waiting for machine to come up
	I0520 11:46:26.184886   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185367   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185400   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:26.185315   59511 retry.go:31] will retry after 973.33325ms: waiting for machine to come up
	I0520 11:46:23.650320   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:23.653452   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.653919   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:23.653962   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.654259   58398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:23.658592   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:23.671310   58398 kubeadm.go:877] updating cluster {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:23.671460   58398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:23.671523   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:23.715873   58398 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:23.715947   58398 ssh_runner.go:195] Run: which lz4
	I0520 11:46:23.720362   58398 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:23.724821   58398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:23.724852   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:25.254824   58398 crio.go:462] duration metric: took 1.534501958s to copy over tarball
	I0520 11:46:25.254937   58398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:27.527739   58398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272764169s)
	I0520 11:46:27.527773   58398 crio.go:469] duration metric: took 2.272914971s to extract the tarball
	I0520 11:46:27.527780   58398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:27.564738   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:27.613196   58398 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:27.613220   58398 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:27.613229   58398 kubeadm.go:928] updating node { 192.168.50.167 8443 v1.30.1 crio true true} ...
	I0520 11:46:27.613321   58398 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:27.613381   58398 ssh_runner.go:195] Run: crio config
	I0520 11:46:27.669907   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:27.669931   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:27.669948   58398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:27.669969   58398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274976 NodeName:embed-certs-274976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:27.670117   58398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274976"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:27.670180   58398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:27.680414   58398 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:27.680473   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:27.690431   58398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0520 11:46:27.707625   58398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:27.724120   58398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0520 11:46:27.744218   58398 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:27.748173   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:27.760752   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:27.891061   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:27.909326   58398 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976 for IP: 192.168.50.167
	I0520 11:46:27.909354   58398 certs.go:194] generating shared ca certs ...
	I0520 11:46:27.909387   58398 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:27.909574   58398 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:27.909637   58398 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:27.909650   58398 certs.go:256] generating profile certs ...
	I0520 11:46:27.909754   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/client.key
	I0520 11:46:27.909843   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key.d6be4a1f
	I0520 11:46:27.909922   58398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key
	I0520 11:46:27.910072   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:27.910127   58398 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:27.910146   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:27.910183   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:27.910208   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:27.910229   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:27.910278   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:27.911030   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:27.949250   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.160027   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160492   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160515   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:27.160435   59511 retry.go:31] will retry after 1.263563226s: waiting for machine to come up
	I0520 11:46:28.426016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426503   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426555   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:28.426463   59511 retry.go:31] will retry after 1.407211588s: waiting for machine to come up
	I0520 11:46:29.835680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836101   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836121   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:29.836058   59511 retry.go:31] will retry after 2.320334078s: waiting for machine to come up
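	The repeated "will retry after …" messages above come from minikube's retry helper backing off while libmachine waits for the guest's DHCP lease to show up. A rough Go sketch of that grow-the-delay-and-retry pattern follows; `machineHasIP` is a hypothetical stand-in for the libvirt lease lookup, not minikube's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// machineHasIP is a hypothetical probe standing in for the libvirt DHCP-lease lookup.
func machineHasIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.10", nil
}

func main() {
	delay := time.Second
	for attempt := 1; ; attempt++ {
		ip, err := machineHasIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, in the spirit of the
		// "will retry after 1.263563226s" messages in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}
```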
	I0520 11:46:27.987645   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:28.036652   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:28.083998   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:46:28.116104   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:28.154695   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:28.181013   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:28.213382   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:28.243550   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:28.268537   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:28.292856   58398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:28.310801   58398 ssh_runner.go:195] Run: openssl version
	I0520 11:46:28.316724   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:28.327461   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332286   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332345   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.339005   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:28.351192   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:28.362639   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367566   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367623   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.373275   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:28.384549   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:28.395600   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400136   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400188   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.405914   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
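	The `openssl x509 -hash -noout` / `ln -fs …/<hash>.0` pairs above install each CA into the system trust directory under its OpenSSL subject-hash name. A rough Go sketch of those same two steps, shelling out to `openssl`; the paths and helper name are illustrative, not minikube's code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject-hash name
// (e.g. /etc/ssl/certs/b5213941.0), mirroring the test -L / ln -fs steps in the log.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs by clearing any existing link first
	return os.Symlink(certPath, link)
}

func main() {
	// Needs write access to the trust directory, as the sudo in the log implies.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```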
	I0520 11:46:28.417533   58398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:28.422327   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:28.428746   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:28.434840   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:28.441227   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:28.447310   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:28.453450   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
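	Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours. The same check written directly against crypto/x509, as a sketch: the file path is taken from the log, the helper name is made up.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the condition `openssl x509 -checkend 86400` tests for one day.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```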
	I0520 11:46:28.462637   58398 kubeadm.go:391] StartCluster: {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:28.462746   58398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:28.462828   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.501759   58398 cri.go:89] found id: ""
	I0520 11:46:28.501826   58398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:28.512536   58398 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:28.512554   58398 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:28.512559   58398 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:28.512603   58398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:28.522288   58398 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:28.523266   58398 kubeconfig.go:125] found "embed-certs-274976" server: "https://192.168.50.167:8443"
	I0520 11:46:28.525188   58398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:28.535056   58398 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.167
	I0520 11:46:28.535092   58398 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:28.535104   58398 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:28.535157   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.573317   58398 cri.go:89] found id: ""
	I0520 11:46:28.573429   58398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:28.593111   58398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:28.603312   58398 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:28.603345   58398 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:28.603398   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:28.612794   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:28.612870   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:28.622819   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:28.634550   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:28.634614   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:28.644568   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.654918   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:28.654980   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.665581   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:28.677090   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:28.677167   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:28.687152   58398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:28.697505   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:28.833183   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:29.963435   58398 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130211655s)
	I0520 11:46:29.963486   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.185882   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.248946   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.334924   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:30.335020   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.835637   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.336028   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.354314   58398 api_server.go:72] duration metric: took 1.01937754s to wait for apiserver process to appear ...
	I0520 11:46:31.354343   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:31.354365   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:31.354916   58398 api_server.go:269] stopped: https://192.168.50.167:8443/healthz: Get "https://192.168.50.167:8443/healthz": dial tcp 192.168.50.167:8443: connect: connection refused
	I0520 11:46:31.854629   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.299609   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.299653   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.299669   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.370279   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.370304   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.370317   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.378426   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.378448   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.855034   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.860164   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:34.860205   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.354583   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.362461   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:35.362489   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.854836   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.859322   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:46:35.867050   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:35.867077   58398 api_server.go:131] duration metric: took 4.512726716s to wait for apiserver health ...
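	The 403 responses for `system:anonymous` and the 500 with `[-]poststarthook/rbac/bootstrap-roles failed` above are normal while a restarted apiserver finishes bootstrapping; the probe simply keeps polling /healthz until it returns 200. A minimal polling loop in the same spirit: the address comes from the log, TLS verification is skipped because the probe is unauthenticated, and the code is a sketch rather than minikube's api_server.go.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous, so skip verification the way `curl -k` would;
		// 403/500 bodies are still readable and carry the poststarthook details.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.167:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```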
	I0520 11:46:35.867088   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:35.867096   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:35.869427   58398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.158195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158712   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:32.158637   59511 retry.go:31] will retry after 2.733920459s: waiting for machine to come up
	I0520 11:46:34.893892   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894436   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:34.894374   59511 retry.go:31] will retry after 3.119057685s: waiting for machine to come up
	I0520 11:46:35.871058   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:35.898211   58398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:35.942295   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:35.954120   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:35.954156   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:35.954171   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:35.954181   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:35.954194   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:35.954204   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 11:46:35.954213   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:35.954220   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:35.954229   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:46:35.954239   58398 system_pods.go:74] duration metric: took 11.924168ms to wait for pod list to return data ...
	I0520 11:46:35.954249   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:35.957627   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:35.957664   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:35.957679   58398 node_conditions.go:105] duration metric: took 3.421729ms to run NodePressure ...
	I0520 11:46:35.957700   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:36.235314   58398 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241075   58398 kubeadm.go:733] kubelet initialised
	I0520 11:46:36.241101   58398 kubeadm.go:734] duration metric: took 5.759765ms waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241110   58398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:36.248030   58398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.254230   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254263   58398 pod_ready.go:81] duration metric: took 6.20401ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.254274   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254281   58398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.259348   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259379   58398 pod_ready.go:81] duration metric: took 5.088735ms for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.259392   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259402   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.266963   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.266997   58398 pod_ready.go:81] duration metric: took 7.580054ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.267010   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.267020   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.346321   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346353   58398 pod_ready.go:81] duration metric: took 79.314801ms for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.346363   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346371   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.746645   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746674   58398 pod_ready.go:81] duration metric: took 400.291751ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.746686   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746692   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.146413   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146443   58398 pod_ready.go:81] duration metric: took 399.744378ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.146454   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146465   58398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.545943   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545966   58398 pod_ready.go:81] duration metric: took 399.491619ms for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.545974   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545981   58398 pod_ready.go:38] duration metric: took 1.304861628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
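	pod_ready.go skips each pod above because the node itself still reports `"Ready":"False"`; the underlying check is just the Ready condition on the node object. A small client-go sketch of that condition lookup, where the kubeconfig path and node name are assumptions taken from the log rather than minikube's own code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, the same
// status the log prints as `"Ready":"False"` for embed-certs-274976.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(cs, "embed-certs-274976")
	fmt.Println(ready, err)
}
```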
	I0520 11:46:37.545997   58398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:37.558076   58398 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:37.558097   58398 kubeadm.go:591] duration metric: took 9.045531679s to restartPrimaryControlPlane
	I0520 11:46:37.558106   58398 kubeadm.go:393] duration metric: took 9.095477048s to StartCluster
	I0520 11:46:37.558125   58398 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.558199   58398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:37.559701   58398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.559920   58398 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:37.561751   58398 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:37.560038   58398 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:37.560143   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:37.563096   58398 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274976"
	I0520 11:46:37.563106   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:37.563124   58398 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274976"
	W0520 11:46:37.563133   58398 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:37.563156   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563164   58398 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274976"
	I0520 11:46:37.563179   58398 addons.go:69] Setting metrics-server=true in profile "embed-certs-274976"
	I0520 11:46:37.563193   58398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274976"
	I0520 11:46:37.563204   58398 addons.go:234] Setting addon metrics-server=true in "embed-certs-274976"
	W0520 11:46:37.563215   58398 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:37.563241   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563516   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563558   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563575   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563600   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563614   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563638   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.577924   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0520 11:46:37.578096   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0520 11:46:37.578120   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0520 11:46:37.578322   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578480   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578527   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578851   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.578879   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.578982   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579003   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579019   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579038   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579202   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579342   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579364   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579385   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.579824   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579867   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.579871   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579895   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.582214   58398 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274976"
	W0520 11:46:37.582230   58398 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:37.582249   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.582511   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.582542   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.594308   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0520 11:46:37.594672   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.595047   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.595067   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.595525   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.595551   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0520 11:46:37.595709   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.596037   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.596545   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.596568   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.596980   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.597579   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.597599   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.597802   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.599963   58398 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:37.598457   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0520 11:46:37.601485   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:37.601505   58398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:37.601527   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.601793   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.602219   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.602238   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.602502   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.602688   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.604122   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.604188   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.605753   58398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:37.604566   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.604874   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.607190   58398 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.607205   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:37.607221   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.607240   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.607263   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.607414   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.607729   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.610088   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610462   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.610483   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610738   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.610912   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.611072   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.611212   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.613655   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0520 11:46:37.613938   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.614386   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.614403   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.614653   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.614826   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.616151   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.616331   58398 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.616345   58398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:37.616362   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.619627   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620050   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.620072   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620214   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.620365   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.620480   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.620660   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.734680   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:37.751098   58398 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:37.842846   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:37.842867   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:37.853894   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.862145   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:37.862167   58398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:37.883656   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.884046   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:37.884061   58398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:37.941086   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:38.187392   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187415   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187819   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.187841   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.187855   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187822   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.188095   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.188115   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.193695   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.193710   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.193926   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.193942   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813454   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813472   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.813794   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.813813   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813821   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813843   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.813894   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.814140   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.814154   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897046   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897078   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897453   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897469   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897494   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897505   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897755   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897759   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897772   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897784   58398 addons.go:470] Verifying addon metrics-server=true in "embed-certs-274976"
	I0520 11:46:38.899910   58398 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.014993   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015405   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015458   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:38.015373   59511 retry.go:31] will retry after 4.505016985s: waiting for machine to come up
	I0520 11:46:38.901272   58398 addons.go:505] duration metric: took 1.341257876s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0520 11:46:39.754549   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:41.758717   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:43.865868   57519 start.go:364] duration metric: took 57.7314587s to acquireMachinesLock for "no-preload-814599"
	I0520 11:46:43.865920   57519 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:43.865931   57519 fix.go:54] fixHost starting: 
	I0520 11:46:43.866311   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:43.866347   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:43.885634   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0520 11:46:43.886068   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:43.886620   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:46:43.886659   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:43.887013   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:43.887200   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:46:43.887337   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:46:43.888902   57519 fix.go:112] recreateIfNeeded on no-preload-814599: state=Stopped err=<nil>
	I0520 11:46:43.888958   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	W0520 11:46:43.889108   57519 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:43.891052   57519 out.go:177] * Restarting existing kvm2 VM for "no-preload-814599" ...
	I0520 11:46:42.521630   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522114   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Found IP for machine: 192.168.61.221
	I0520 11:46:42.522140   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserving static IP address...
	I0520 11:46:42.522157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has current primary IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522534   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserved static IP address: 192.168.61.221
	I0520 11:46:42.522556   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for SSH to be available...
	I0520 11:46:42.522571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.522593   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | skip adding static IP to network mk-default-k8s-diff-port-668445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"}
	I0520 11:46:42.522613   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Getting to WaitForSSH function...
	I0520 11:46:42.524812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525199   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.525221   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525362   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH client type: external
	I0520 11:46:42.525391   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa (-rw-------)
	I0520 11:46:42.525427   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:42.525443   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | About to run SSH command:
	I0520 11:46:42.525455   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | exit 0
	I0520 11:46:42.648689   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:42.649122   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetConfigRaw
	I0520 11:46:42.649745   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:42.652220   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.652596   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652821   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:46:42.653026   58840 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:42.653046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:42.653235   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.655453   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655758   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.655777   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655923   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.656089   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656384   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.656551   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.656729   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.656739   58840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:42.761195   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:42.761225   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761450   58840 buildroot.go:166] provisioning hostname "default-k8s-diff-port-668445"
	I0520 11:46:42.761477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761666   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.764340   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764670   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.764695   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764801   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.764973   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765131   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765285   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.765472   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.765634   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.765647   58840 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-668445 && echo "default-k8s-diff-port-668445" | sudo tee /etc/hostname
	I0520 11:46:42.883179   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-668445
	
	I0520 11:46:42.883204   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.885913   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886302   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.886347   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.886699   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.886884   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.887040   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.887238   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.887419   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.887444   58840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-668445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-668445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-668445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:43.001769   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:43.001792   58840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:43.001825   58840 buildroot.go:174] setting up certificates
	I0520 11:46:43.001834   58840 provision.go:84] configureAuth start
	I0520 11:46:43.001842   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:43.002064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.004606   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.004897   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.004944   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.005087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.006929   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.007296   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007386   58840 provision.go:143] copyHostCerts
	I0520 11:46:43.007461   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:43.007470   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:43.007523   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:43.007608   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:43.007617   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:43.007637   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:43.007686   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:43.007693   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:43.007709   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:43.007753   58840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-668445 san=[127.0.0.1 192.168.61.221 default-k8s-diff-port-668445 localhost minikube]
	I0520 11:46:43.165780   58840 provision.go:177] copyRemoteCerts
	I0520 11:46:43.165835   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:43.165860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.168782   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169136   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.169181   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169343   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.169575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.169717   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.169892   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.252175   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:43.280313   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0520 11:46:43.306901   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:43.331191   58840 provision.go:87] duration metric: took 329.344445ms to configureAuth
	I0520 11:46:43.331227   58840 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:43.331478   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:43.331576   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.334173   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334573   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.334603   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334770   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.334961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335108   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335236   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.335381   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.335593   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.335615   58840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:43.630180   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:43.630213   58840 machine.go:97] duration metric: took 977.172725ms to provisionDockerMachine
	I0520 11:46:43.630229   58840 start.go:293] postStartSetup for "default-k8s-diff-port-668445" (driver="kvm2")
	I0520 11:46:43.630244   58840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:43.630270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.630597   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:43.630635   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.633307   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633650   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.633680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633798   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.633976   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.634141   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.634264   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.719350   58840 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:43.723320   58840 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:43.723339   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:43.723396   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:43.723478   58840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:43.723564   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:43.732422   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:43.757138   58840 start.go:296] duration metric: took 126.894206ms for postStartSetup
	I0520 11:46:43.757179   58840 fix.go:56] duration metric: took 21.619082215s for fixHost
	I0520 11:46:43.757202   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.759486   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.759860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.759887   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.760065   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.760248   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760425   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760527   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.760654   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.760824   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.760839   58840 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:43.865691   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205603.841593335
	
	I0520 11:46:43.865713   58840 fix.go:216] guest clock: 1716205603.841593335
	I0520 11:46:43.865722   58840 fix.go:229] Guest: 2024-05-20 11:46:43.841593335 +0000 UTC Remote: 2024-05-20 11:46:43.757183475 +0000 UTC m=+161.848941663 (delta=84.40986ms)
	I0520 11:46:43.865777   58840 fix.go:200] guest clock delta is within tolerance: 84.40986ms
	I0520 11:46:43.865790   58840 start.go:83] releasing machines lock for "default-k8s-diff-port-668445", held for 21.727726024s
	I0520 11:46:43.865824   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.866087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.868681   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869054   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.869076   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869256   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869792   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.870035   58840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:43.870091   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.870174   58840 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:43.870195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.872817   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873059   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873496   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873537   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873684   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.873687   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873861   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.873866   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.874033   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.874046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.874234   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	W0520 11:46:43.973840   58840 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:43.973913   58840 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:43.980706   58840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:44.132625   58840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:44.140558   58840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:44.140629   58840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:44.158922   58840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:44.158946   58840 start.go:494] detecting cgroup driver to use...
	I0520 11:46:44.159006   58840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:44.179206   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:44.194183   58840 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:44.194254   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:44.208701   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:44.223086   58840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:44.342811   58840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:44.491358   58840 docker.go:233] disabling docker service ...
	I0520 11:46:44.491428   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:44.509165   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:44.526088   58840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:44.700452   58840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:44.833129   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:44.849090   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:44.870148   58840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:44.870222   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.880951   58840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:44.881016   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.895038   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.908245   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.918844   58840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:44.929838   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.940446   58840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.960350   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.974587   58840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:44.985689   58840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:44.985748   58840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:44.999929   58840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:45.010732   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:45.140038   58840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:45.291656   58840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:45.291728   58840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:45.297351   58840 start.go:562] Will wait 60s for crictl version
	I0520 11:46:45.297412   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:46:45.301297   58840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:45.343635   58840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:45.343703   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.373914   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.404596   58840 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:43.892523   57519 main.go:141] libmachine: (no-preload-814599) Calling .Start
	I0520 11:46:43.892686   57519 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:46:43.893370   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:46:43.893729   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:46:43.894105   57519 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:46:43.894847   57519 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:46:45.229992   57519 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:46:45.230981   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.231451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.231518   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.231438   59753 retry.go:31] will retry after 187.539159ms: waiting for machine to come up
	I0520 11:46:45.420854   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.421369   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.421393   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.421328   59753 retry.go:31] will retry after 327.709711ms: waiting for machine to come up
	I0520 11:46:45.750799   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.751384   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.751414   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.751334   59753 retry.go:31] will retry after 445.096849ms: waiting for machine to come up
	I0520 11:46:46.197899   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.198567   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.198598   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.198536   59753 retry.go:31] will retry after 561.029629ms: waiting for machine to come up
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.405909   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:45.408865   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409399   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:45.409429   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409655   58840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:45.413815   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:45.426264   58840 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:45.426366   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:45.426405   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:45.468938   58840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:45.469021   58840 ssh_runner.go:195] Run: which lz4
	I0520 11:46:45.474957   58840 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:45.481006   58840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:45.481039   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:44.255728   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:44.756897   58398 node_ready.go:49] node "embed-certs-274976" has status "Ready":"True"
	I0520 11:46:44.756963   58398 node_ready.go:38] duration metric: took 7.005828991s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:44.756978   58398 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:44.767584   58398 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774049   58398 pod_ready.go:92] pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:44.774078   58398 pod_ready.go:81] duration metric: took 6.468023ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774099   58398 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795658   58398 pod_ready.go:92] pod "etcd-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.795691   58398 pod_ready.go:81] duration metric: took 2.021582833s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795705   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810711   58398 pod_ready.go:92] pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.810739   58398 pod_ready.go:81] duration metric: took 15.025215ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810751   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.760855   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.761534   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.761579   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.761484   59753 retry.go:31] will retry after 497.075791ms: waiting for machine to come up
	I0520 11:46:47.260059   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:47.260542   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:47.260580   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:47.260479   59753 retry.go:31] will retry after 910.980097ms: waiting for machine to come up
	I0520 11:46:48.173677   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:48.174161   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:48.174189   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:48.174108   59753 retry.go:31] will retry after 931.969789ms: waiting for machine to come up
	I0520 11:46:49.107672   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:49.108241   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:49.108271   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:49.108194   59753 retry.go:31] will retry after 1.075042352s: waiting for machine to come up
	I0520 11:46:50.184518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:50.185078   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:50.185108   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:50.185025   59753 retry.go:31] will retry after 1.174646184s: waiting for machine to come up
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.010637   58840 crio.go:462] duration metric: took 1.535717978s to copy over tarball
	I0520 11:46:47.010719   58840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:49.308712   58840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297962166s)
	I0520 11:46:49.308747   58840 crio.go:469] duration metric: took 2.298080001s to extract the tarball
	I0520 11:46:49.308756   58840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:49.346851   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:49.393090   58840 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:49.393118   58840 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:49.393126   58840 kubeadm.go:928] updating node { 192.168.61.221 8444 v1.30.1 crio true true} ...
	I0520 11:46:49.393247   58840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-668445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:49.393326   58840 ssh_runner.go:195] Run: crio config
	I0520 11:46:49.438553   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:49.438575   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:49.438592   58840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:49.438618   58840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.221 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-668445 NodeName:default-k8s-diff-port-668445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:49.438792   58840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-668445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:49.438876   58840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:49.449415   58840 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:49.449487   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:49.460951   58840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0520 11:46:49.479654   58840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:49.497494   58840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
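Note: the kubeadm.yaml staged above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a rough sanity check outside the test run, the documents can be split and inspected with a short Go program; this is only a minimal sketch assuming gopkg.in/yaml.v3 and a hypothetical local copy of the file, not something the test itself does.

// Minimal sketch: decode each document in a kubeadm-style multi-doc YAML
// and print its kind/apiVersion. The file path is hypothetical.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the staged config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}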
	I0520 11:46:49.518474   58840 ssh_runner.go:195] Run: grep 192.168.61.221	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:49.522598   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:49.537237   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:49.686022   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:49.712850   58840 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445 for IP: 192.168.61.221
	I0520 11:46:49.712871   58840 certs.go:194] generating shared ca certs ...
	I0520 11:46:49.712885   58840 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:49.713065   58840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:49.713110   58840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:49.713123   58840 certs.go:256] generating profile certs ...
	I0520 11:46:49.713261   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.key
	I0520 11:46:49.713346   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key.50005da2
	I0520 11:46:49.713390   58840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key
	I0520 11:46:49.713548   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:49.713583   58840 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:49.713597   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:49.713629   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:49.713656   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:49.713686   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:49.713744   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:49.714370   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:49.751367   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:49.787162   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:49.822660   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:49.863401   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 11:46:49.895295   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:49.928089   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:49.952687   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:49.982562   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:50.007234   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:50.031099   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:50.055146   58840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:50.073066   58840 ssh_runner.go:195] Run: openssl version
	I0520 11:46:50.080951   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:50.092134   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097436   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097496   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.103714   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:50.114321   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:50.125104   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131225   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131276   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.139113   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:50.153959   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:50.165441   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170302   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170349   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.176388   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:50.188372   58840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:50.193168   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:50.199298   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:50.208100   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:50.214704   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:50.220565   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:50.226463   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
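The `openssl x509 -checkend 86400` calls above only verify that each certificate remains valid for at least another 24 hours. A rough standard-library equivalent, sketched here for illustration (the certificate path is taken from the log and assumed readable locally):

// Sketch: report whether a PEM-encoded certificate expires within 24h,
// roughly what `openssl x509 -checkend 86400` checks.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}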
	I0520 11:46:50.232400   58840 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:50.232472   58840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:50.232532   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.271977   58840 cri.go:89] found id: ""
	I0520 11:46:50.272070   58840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:50.283676   58840 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:50.283699   58840 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:50.283706   58840 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:50.283753   58840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:50.296808   58840 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:50.298406   58840 kubeconfig.go:125] found "default-k8s-diff-port-668445" server: "https://192.168.61.221:8444"
	I0520 11:46:50.300563   58840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:50.310940   58840 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.221
	I0520 11:46:50.310964   58840 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:50.310975   58840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:50.311010   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.361321   58840 cri.go:89] found id: ""
	I0520 11:46:50.361400   58840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:50.382699   58840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:50.397635   58840 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:50.397657   58840 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:50.397707   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0520 11:46:50.407303   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:50.407364   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:50.418331   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0520 11:46:50.427889   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:50.427938   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:50.437649   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.446810   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:50.446861   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.456428   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0520 11:46:50.465444   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:50.465512   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:50.478585   58840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:50.489053   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:50.607045   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.346500   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.594241   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.661179   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
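The restart path above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) rather than a full init. A hedged sketch of driving the same phase sequence from Go with os/exec follows; the kubeadm binary location and config path are copied from the log and assumed, not independently verified, and the real test runs these over SSH inside the guest.

// Sketch: run the kubeadm init phases in order, using the versioned
// binaries directory shown in the log. Paths are illustrative only.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %v failed: %v", p, err)
		}
	}
}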
	I0520 11:46:51.751253   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:51.751331   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.318166   58398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.318193   58398 pod_ready.go:81] duration metric: took 1.507434439s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.318207   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323814   58398 pod_ready.go:92] pod "kube-proxy-6n9d5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.323850   58398 pod_ready.go:81] duration metric: took 5.632938ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323865   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356208   58398 pod_ready.go:92] pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.356235   58398 pod_ready.go:81] duration metric: took 32.360735ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356247   58398 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:50.363910   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:52.364530   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:51.360905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:51.482451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:51.482493   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:51.361293   59753 retry.go:31] will retry after 1.455979778s: waiting for machine to come up
	I0520 11:46:52.818580   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:52.819134   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:52.819162   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:52.819073   59753 retry.go:31] will retry after 2.771639261s: waiting for machine to come up
	I0520 11:46:55.593581   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:55.594138   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:55.594167   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:55.594089   59753 retry.go:31] will retry after 2.881341208s: waiting for machine to come up
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.252101   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.752498   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.794795   58840 api_server.go:72] duration metric: took 1.043540264s to wait for apiserver process to appear ...
	I0520 11:46:52.794823   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:52.794847   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:52.795569   58840 api_server.go:269] stopped: https://192.168.61.221:8444/healthz: Get "https://192.168.61.221:8444/healthz": dial tcp 192.168.61.221:8444: connect: connection refused
	I0520 11:46:53.294972   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.015833   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.015872   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.015889   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.053674   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.053705   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.294988   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.300440   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.300474   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:56.795720   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.813192   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.813221   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.295045   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.298962   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:57.298994   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.795590   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.800795   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:46:57.809060   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:57.809084   58840 api_server.go:131] duration metric: took 5.014253865s to wait for apiserver health ...
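The 403 -> 500 -> 200 progression above is the usual apiserver startup sequence: anonymous /healthz requests are rejected until the RBAC bootstrap roles exist, then individual post-start hooks report failures until they finish. A minimal polling loop along these lines captures the same wait; the URL and timeout mirror the log, and TLS verification is skipped here only because the cluster CA is not loaded in this sketch.

// Sketch: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.221:8444/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}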
	I0520 11:46:57.809092   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:57.809098   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:57.811020   58840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:54.864263   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.363797   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.812202   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:57.824260   58840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:57.843189   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:57.851978   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:57.852012   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:57.852029   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:57.852035   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:57.852042   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:57.852046   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:46:57.852055   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:57.852063   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:57.852067   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:46:57.852073   58840 system_pods.go:74] duration metric: took 8.872046ms to wait for pod list to return data ...
	I0520 11:46:57.852084   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:57.855020   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:57.855043   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:57.855052   58840 node_conditions.go:105] duration metric: took 2.95984ms to run NodePressure ...
	I0520 11:46:57.855065   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:58.124410   58840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129201   58840 kubeadm.go:733] kubelet initialised
	I0520 11:46:58.129222   58840 kubeadm.go:734] duration metric: took 4.789292ms waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129229   58840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:58.134883   58840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.139522   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139557   58840 pod_ready.go:81] duration metric: took 4.638772ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.139569   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139583   58840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.143636   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143657   58840 pod_ready.go:81] duration metric: took 4.061011ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.143669   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143677   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.147479   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147499   58840 pod_ready.go:81] duration metric: took 3.813494ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.147510   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147517   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.247677   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247710   58840 pod_ready.go:81] duration metric: took 100.183226ms for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.247723   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247730   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.648289   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648339   58840 pod_ready.go:81] duration metric: took 400.599602ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.648352   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648365   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.047071   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047099   58840 pod_ready.go:81] duration metric: took 398.721478ms for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.047108   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047114   58840 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.447840   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447866   58840 pod_ready.go:81] duration metric: took 400.738125ms for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.447876   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447885   58840 pod_ready.go:38] duration metric: took 1.318647527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
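Each pod_ready.go wait above boils down to reading the pod and checking its Ready condition, bailing out early when the hosting node itself is not Ready. A hedged client-go sketch of that core check follows; the kubeconfig path and pod name are placeholders taken from this run, not part of the test harness.

// Sketch: check whether a pod's Ready condition is True, the core of the
// pod_ready.go waits above. Names and paths are hypothetical.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-proxy-7ddtk", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}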
	I0520 11:46:59.447899   58840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:59.459985   58840 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:59.460011   58840 kubeadm.go:591] duration metric: took 9.176298972s to restartPrimaryControlPlane
	I0520 11:46:59.460023   58840 kubeadm.go:393] duration metric: took 9.227629245s to StartCluster
	I0520 11:46:59.460038   58840 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.460120   58840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:59.461897   58840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.462172   58840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:59.464109   58840 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:59.462263   58840 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:59.462375   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:59.465582   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:59.464212   58840 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465645   58840 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465657   58840 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:59.465693   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464218   58840 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465752   58840 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465768   58840 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:59.465794   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464219   58840 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465877   58840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-668445"
	I0520 11:46:59.466087   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466116   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466210   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466216   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466232   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466235   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.481963   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:46:59.482030   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0520 11:46:59.482551   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.482768   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.483115   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483140   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483495   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483516   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483759   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.483930   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.483979   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.484484   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.484509   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.485272   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 11:46:59.485657   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.486686   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.486716   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.487080   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.487552   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.487588   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.487989   58840 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.488007   58840 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:59.488037   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.488349   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.488384   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.500796   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0520 11:46:59.501376   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.501793   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0520 11:46:59.502058   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.502098   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.502308   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.502439   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.502661   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.502737   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 11:46:59.502976   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503009   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.503199   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.503329   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.503548   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.503648   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503659   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.504278   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.504632   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.504814   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.506755   58840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:59.504875   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.505278   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.508201   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:59.508219   58840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:59.508238   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.509465   58840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:59.510750   58840 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.510765   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:59.510778   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.511023   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511444   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.511480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511707   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.511939   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.512117   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.512271   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.513612   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.513980   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.514009   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.514207   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.514354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.514460   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.514564   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.523003   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0520 11:46:59.523361   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.523835   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.523858   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.524115   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.524266   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.525489   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.525661   58840 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.525673   58840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:59.525685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.527762   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.528087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.528309   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.528414   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.528559   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.652795   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:59.672785   58840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:46:59.759683   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.760767   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.775512   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:59.775536   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:59.807608   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:59.807633   58840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:59.835597   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:59.835624   58840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:59.862717   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:47:00.145002   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145032   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145352   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145352   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145372   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.145380   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145389   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145645   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145700   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145717   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.150992   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.151008   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.151358   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.151376   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.151385   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.802987   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803030   58840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042235492s)
	I0520 11:47:00.803063   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803081   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803386   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803404   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803422   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803424   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803434   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803439   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803449   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803464   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803621   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803631   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803642   58840 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-668445"
	I0520 11:47:00.803677   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803729   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803743   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.805628   58840 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0520 11:46:58.478246   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:58.478733   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:58.478765   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:58.478670   59753 retry.go:31] will retry after 3.220647355s: waiting for machine to come up
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.806818   58840 addons.go:505] duration metric: took 1.344560138s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0520 11:47:01.675861   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.863455   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.863859   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.702053   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702503   57519 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:47:01.702519   57519 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:47:01.702532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702964   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.702994   57519 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:47:01.703017   57519 main.go:141] libmachine: (no-preload-814599) DBG | skip adding static IP to network mk-no-preload-814599 - found existing host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"}
	I0520 11:47:01.703036   57519 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:47:01.703048   57519 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:47:01.705097   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705376   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.705409   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705482   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:47:01.705511   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:47:01.705543   57519 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:47:01.705558   57519 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:47:01.705593   57519 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:47:01.833182   57519 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:47:01.833551   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:47:01.834149   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:01.836532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837008   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.837047   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837310   57519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:47:01.837509   57519 machine.go:94] provisionDockerMachine start ...
	I0520 11:47:01.837527   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:01.837717   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.839814   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840151   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.840171   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840335   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.840485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840631   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.840958   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.841138   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.841149   57519 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:47:01.953190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:47:01.953221   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953480   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:47:01.953504   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953667   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.956155   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956487   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.956518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956684   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.956884   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957071   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957239   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.957416   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.957580   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.957593   57519 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:47:02.079993   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:47:02.080018   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.082623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.082950   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.082984   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.083154   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.083353   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083520   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083677   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.083852   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.084014   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.084029   57519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:47:02.204190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:47:02.204225   57519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:47:02.204252   57519 buildroot.go:174] setting up certificates
	I0520 11:47:02.204264   57519 provision.go:84] configureAuth start
	I0520 11:47:02.204280   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:02.204562   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:02.207132   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207477   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.207501   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207633   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.209905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210212   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.210231   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210414   57519 provision.go:143] copyHostCerts
	I0520 11:47:02.210472   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:47:02.210493   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:47:02.210560   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:47:02.210743   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:47:02.210758   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:47:02.210795   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:47:02.210875   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:47:02.210883   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:47:02.210905   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:47:02.210962   57519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:47:02.411383   57519 provision.go:177] copyRemoteCerts
	I0520 11:47:02.411436   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:47:02.411458   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.414071   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414443   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.414473   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414675   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.414859   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.415006   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.415125   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.507148   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:47:02.536944   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:47:02.562291   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:47:02.589034   57519 provision.go:87] duration metric: took 384.752486ms to configureAuth
	I0520 11:47:02.589092   57519 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:47:02.589320   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:47:02.589412   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.592287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592704   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.592731   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592957   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.593180   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593368   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593561   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.593724   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.593910   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.593931   57519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:47:02.899943   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:47:02.900037   57519 machine.go:97] duration metric: took 1.062511085s to provisionDockerMachine
	I0520 11:47:02.900074   57519 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:47:02.900112   57519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:47:02.900155   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:02.900546   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:47:02.900581   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.903643   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904057   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.904089   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904253   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.904433   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.904580   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.904710   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.999752   57519 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:47:03.005832   57519 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:47:03.005856   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:47:03.005940   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:47:03.006029   57519 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:47:03.006145   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:47:03.016853   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:03.049845   57519 start.go:296] duration metric: took 149.725ms for postStartSetup
	I0520 11:47:03.049886   57519 fix.go:56] duration metric: took 19.183954619s for fixHost
	I0520 11:47:03.049912   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.052841   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053175   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.053203   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053342   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.053537   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053730   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053899   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.054048   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:03.054227   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:03.054241   57519 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:47:03.166461   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205623.121199957
	
	I0520 11:47:03.166490   57519 fix.go:216] guest clock: 1716205623.121199957
	I0520 11:47:03.166500   57519 fix.go:229] Guest: 2024-05-20 11:47:03.121199957 +0000 UTC Remote: 2024-05-20 11:47:03.049891315 +0000 UTC m=+366.742654246 (delta=71.308642ms)
	I0520 11:47:03.166523   57519 fix.go:200] guest clock delta is within tolerance: 71.308642ms
	I0520 11:47:03.166530   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 19.30063132s
	I0520 11:47:03.166554   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.166858   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:03.169486   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169816   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.169861   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169990   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170546   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170728   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170814   57519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:47:03.170875   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.170920   57519 ssh_runner.go:195] Run: cat /version.json
	I0520 11:47:03.170944   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.173717   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.173944   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174128   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174153   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174338   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174365   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174371   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174570   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174571   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174793   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.174956   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.175049   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:03.175169   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:47:03.276456   57519 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:47:03.276537   57519 ssh_runner.go:195] Run: systemctl --version
	I0520 11:47:03.282946   57519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:47:03.428948   57519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:47:03.436462   57519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:47:03.436529   57519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:47:03.453022   57519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:47:03.453045   57519 start.go:494] detecting cgroup driver to use...
	I0520 11:47:03.453103   57519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:47:03.469505   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:47:03.483617   57519 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:47:03.483673   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:47:03.498514   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:47:03.513169   57519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:47:03.638355   57519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:47:03.819407   57519 docker.go:233] disabling docker service ...
	I0520 11:47:03.819473   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:47:03.837928   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:47:03.853747   57519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:47:03.976480   57519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:47:04.097685   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:47:04.112917   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:47:04.132979   57519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:47:04.133044   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.145501   57519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:47:04.145594   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.157957   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.168774   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.180521   57519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:47:04.191739   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.203232   57519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.221386   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.231999   57519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:47:04.243401   57519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:47:04.243447   57519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:47:04.259232   57519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:47:04.270945   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:04.387908   57519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:47:04.526507   57519 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:47:04.526588   57519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:47:04.531568   57519 start.go:562] Will wait 60s for crictl version
	I0520 11:47:04.531628   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.535751   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:47:04.583384   57519 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:47:04.583470   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.617545   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.652685   57519 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:47:04.654199   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:04.657274   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657624   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:04.657653   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657858   57519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:47:04.662344   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:04.676310   57519 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:47:04.676437   57519 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:47:04.676480   57519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:47:04.723427   57519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:47:04.723452   57519 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:47:04.723502   57519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.723705   57519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.723729   57519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.723773   57519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.723829   57519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.723896   57519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.723913   57519 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:47:04.723767   57519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.725737   57519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.725630   57519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.725696   57519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.849687   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896118   57519 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:47:04.896166   57519 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896214   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.901457   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.940481   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:47:04.940581   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946097   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0520 11:47:04.946120   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946168   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.958539   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:05.058882   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:05.059155   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:05.061672   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:47:05.082727   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:05.087840   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:05.606193   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
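The podman image inspect calls above (process 57519) check whether each control-plane image is already present in CRI-O's image store; anything missing is untagged with crictl and re-loaded from the tarballs minikube keeps under /var/lib/minikube/images. A minimal manual equivalent, using one image tag taken from this log:

    # Prints an image ID if CRI-O/podman already has the image, fails otherwise.
    $ sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-controller-manager:v1.30.1
    # If missing or stale, drop the tag and load the cached tarball copied to the node.
    $ sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
    $ sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1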
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
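The half-second pgrep loop above (process 58316) is minikube polling for a running kube-apiserver before it falls back to container and log inspection. The probe is a single command:

    # Exit code 0 only while a kube-apiserver process started for this minikube node is running.
    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'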
	I0520 11:47:03.677231   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.176977   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.677184   58840 node_ready.go:49] node "default-k8s-diff-port-668445" has status "Ready":"True"
	I0520 11:47:06.677208   58840 node_ready.go:38] duration metric: took 7.004390787s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:47:06.677217   58840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:06.684890   58840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690323   58840 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.690345   58840 pod_ready.go:81] duration metric: took 5.420403ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690354   58840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695944   58840 pod_ready.go:92] pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.695968   58840 pod_ready.go:81] duration metric: took 5.606311ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695979   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701932   58840 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.701953   58840 pod_ready.go:81] duration metric: took 5.965003ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701966   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
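Process 58840 has reached the readiness-wait phase: it waits for the node to report Ready, then for each system-critical pod in kube-system. Roughly the same view can be pulled by hand (assuming the kubectl context carries the profile name, as minikube normally sets it):

    $ kubectl --context default-k8s-diff-port-668445 get node default-k8s-diff-port-668445
    $ kubectl --context default-k8s-diff-port-668445 -n kube-system get pods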
	I0520 11:47:04.363354   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:06.363448   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.125249   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1: (3.166660678s)
	I0520 11:47:08.125272   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (3.179080546s)
	I0520 11:47:08.125294   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:47:08.125314   57519 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:47:08.125343   57519 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.125346   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1: (3.066428315s)
	I0520 11:47:08.125383   57519 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:47:08.125398   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125415   57519 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.125415   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (3.066238572s)
	I0520 11:47:08.125472   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125517   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (3.063820229s)
	I0520 11:47:08.125478   57519 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:47:08.125582   57519 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.125596   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (3.04284016s)
	I0520 11:47:08.125612   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125637   57519 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:47:08.125659   57519 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.125668   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1: (3.037804525s)
	I0520 11:47:08.125696   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125707   57519 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:47:08.125709   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.519483497s)
	I0520 11:47:08.125727   57519 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.125758   57519 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:47:08.125775   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125808   57519 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.125843   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.140377   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.140435   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.140472   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.140518   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.140550   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.140604   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.277671   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:47:08.277785   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.286159   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:47:08.286163   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:47:08.286261   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:08.286297   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:08.287554   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:47:08.287623   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:08.300162   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:47:08.300183   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0520 11:47:08.300197   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300203   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0520 11:47:08.300221   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0520 11:47:08.300235   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0520 11:47:08.300168   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:47:08.300245   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300254   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:08.300315   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:08.304669   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0520 11:47:10.580290   57519 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.279947927s)
	I0520 11:47:10.580392   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0520 11:47:10.580421   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.280153203s)
	I0520 11:47:10.580444   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:47:10.580466   57519 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:10.580514   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
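With every crictl listing above returning zero containers, minikube falls back to host-level diagnostics. The commands it runs, which can also be replayed manually over `minikube ssh` for this profile, are:

    $ sudo journalctl -u kubelet -n 400
    $ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    $ sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    $ sudo journalctl -u crio -n 400
    $ sudo crictl ps -a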
	I0520 11:47:07.717364   58840 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:07.717391   58840 pod_ready.go:81] duration metric: took 1.01541621s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:07.717404   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041632   58840 pod_ready.go:92] pod "kube-proxy-7ddtk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:08.041657   58840 pod_ready.go:81] duration metric: took 324.24633ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041669   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:10.047877   58840 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.364366   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:10.366662   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:12.895005   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:11.650206   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.069627914s)
	I0520 11:47:11.650241   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:47:11.650265   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:11.650322   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:13.011174   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.360821671s)
	I0520 11:47:13.011203   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:47:13.011225   57519 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.011270   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:12.048869   58840 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:12.048894   58840 pod_ready.go:81] duration metric: took 4.007218109s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:12.048903   58840 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:14.057711   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.555471   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:15.365437   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:17.864290   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.885885   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.874589428s)
	I0520 11:47:16.885917   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:47:16.885952   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:16.886034   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:18.674100   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.788037677s)
	I0520 11:47:18.674128   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:47:18.674149   57519 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:18.674197   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:20.543671   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.86944462s)
	I0520 11:47:20.543705   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:47:20.543733   57519 cache_images.go:123] Successfully loaded all cached images
	I0520 11:47:20.543739   57519 cache_images.go:92] duration metric: took 15.820273224s to LoadCachedImages
	I0520 11:47:20.543749   57519 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:47:20.543863   57519 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:47:20.543944   57519 ssh_runner.go:195] Run: crio config
	I0520 11:47:20.593647   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:20.593672   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:20.593692   57519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:47:20.593719   57519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:47:20.593863   57519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:47:20.593936   57519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:47:20.604739   57519 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:47:20.604791   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:47:20.614029   57519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:47:20.630587   57519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:47:20.647296   57519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
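At this point the rendered kubeadm, kubelet, and kube-proxy configuration shown above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node, next to the kubelet unit and its drop-in. For a manual reproduction of the bring-up, the general shape would be a kubeadm init driven by that file (a sketch only; the exact invocation and flags minikube uses are not shown in this log excerpt):

    $ sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=all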
	I0520 11:47:20.664453   57519 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:47:20.668296   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:20.681086   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:20.816542   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
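The /etc/hosts one-liner above is an idempotent update: filter out any previous control-plane.minikube.internal entry, append the current control-plane IP, and copy the temp file back into place before reloading systemd and starting the kubelet. Stripped of the ssh wrapper it amounts to:

    $ { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
        echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$
    $ sudo cp /tmp/h.$$ /etc/hosts
    $ sudo systemctl daemon-reload && sudo systemctl start kubelet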
	I0520 11:47:20.834344   57519 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:47:20.834369   57519 certs.go:194] generating shared ca certs ...
	I0520 11:47:20.834388   57519 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:47:20.834578   57519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:47:20.834634   57519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:47:20.834649   57519 certs.go:256] generating profile certs ...
	I0520 11:47:20.834777   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:47:20.834876   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:47:20.834928   57519 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:47:20.835090   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:47:20.835146   57519 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:47:20.835159   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:47:20.835192   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:47:20.835226   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:47:20.835255   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:47:20.835314   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:20.836325   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:47:20.883227   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:47:20.918566   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:47:20.959097   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:47:21.006422   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:47:21.035717   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:47:21.071087   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:47:21.094330   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:47:21.119597   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:47:21.145488   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:47:21.173152   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:47:21.197627   57519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:47:21.214595   57519 ssh_runner.go:195] Run: openssl version
	I0520 11:47:21.220678   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:47:21.232251   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.236953   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.237015   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.243048   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:47:21.253667   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:47:21.264223   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268607   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268648   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.274112   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:47:21.284427   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:47:21.294811   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299403   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299455   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.304954   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:47:21.315320   57519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:47:21.319956   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:47:21.325692   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:47:21.331232   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:47:21.337240   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
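The openssl calls above check that each existing control-plane certificate remains valid for at least the next 24 hours; -checkend 86400 exits non-zero when a certificate expires within that window, which is how minikube decides whether certs need to be regenerated before restarting the control plane. For example:

    # Exit 0: still valid 24h from now; non-zero: expiring or already expired.
    $ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400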
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:18.556589   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.557476   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.363536   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:22.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:21.342881   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:47:21.349172   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:47:21.354721   57519 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:47:21.354839   57519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:47:21.354923   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.399473   57519 cri.go:89] found id: ""
	I0520 11:47:21.399554   57519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:47:21.413632   57519 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:47:21.413656   57519 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:47:21.413662   57519 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:47:21.413712   57519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:47:21.425556   57519 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:47:21.426515   57519 kubeconfig.go:125] found "no-preload-814599" server: "https://192.168.39.185:8443"
	I0520 11:47:21.428556   57519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:47:21.438628   57519 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.185
	I0520 11:47:21.438655   57519 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:47:21.438668   57519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:47:21.438708   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.485080   57519 cri.go:89] found id: ""
	I0520 11:47:21.485149   57519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:47:21.502591   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:47:21.512148   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:47:21.512168   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:47:21.512208   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:47:21.521378   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:47:21.521434   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:47:21.531568   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:47:21.541404   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:47:21.541459   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:47:21.552018   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.562667   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:47:21.562711   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.572240   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:47:21.581472   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:47:21.581535   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:47:21.591072   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:47:21.601114   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:21.716435   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.597273   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.792224   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.866757   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.999294   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:47:22.999389   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.499642   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.000424   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.022646   57519 api_server.go:72] duration metric: took 1.023349447s to wait for apiserver process to appear ...
	I0520 11:47:24.022673   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:47:24.022691   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:24.023162   57519 api_server.go:269] stopped: https://192.168.39.185:8443/healthz: Get "https://192.168.39.185:8443/healthz": dial tcp 192.168.39.185:8443: connect: connection refused
	I0520 11:47:24.523760   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:23.055887   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:25.555044   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:24.863476   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.363148   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.158548   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.158572   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.158585   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.205830   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.205867   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.523283   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.527618   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:27.527643   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.023238   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.029601   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.029630   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.522849   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.528350   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.528372   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.023422   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.027460   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.027493   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.523739   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.528289   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.528317   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.022859   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.027324   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:30.027348   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.522929   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.527596   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:47:30.534571   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:47:30.534592   57519 api_server.go:131] duration metric: took 6.511912744s to wait for apiserver health ...
	I0520 11:47:30.534601   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:30.534609   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:30.536647   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:47:30.537896   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:47:30.548779   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:47:30.568124   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:47:30.577489   57519 system_pods.go:59] 8 kube-system pods found
	I0520 11:47:30.577522   57519 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:47:30.577530   57519 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:47:30.577538   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:47:30.577544   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:47:30.577550   57519 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:47:30.577560   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:47:30.577567   57519 system_pods.go:61] "metrics-server-569cc877fc-v2zh9" [1be844eb-66ba-4a82-86fb-6f53de7e4d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:47:30.577577   57519 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:47:30.577586   57519 system_pods.go:74] duration metric: took 9.439264ms to wait for pod list to return data ...
	I0520 11:47:30.577598   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:47:30.581346   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:47:30.581367   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:47:30.581377   57519 node_conditions.go:105] duration metric: took 3.772205ms to run NodePressure ...
	I0520 11:47:30.581398   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:30.849703   57519 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853895   57519 kubeadm.go:733] kubelet initialised
	I0520 11:47:30.853917   57519 kubeadm.go:734] duration metric: took 4.191303ms waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853925   57519 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:30.861807   57519 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.866768   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866797   57519 pod_ready.go:81] duration metric: took 4.964994ms for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.866809   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866819   57519 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.871001   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871027   57519 pod_ready.go:81] duration metric: took 4.200278ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.871038   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871046   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.875615   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875635   57519 pod_ready.go:81] duration metric: took 4.580378ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.875645   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875655   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.971186   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971210   57519 pod_ready.go:81] duration metric: took 95.545422ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.971219   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971226   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:27.555356   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.556470   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.862807   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:32.364158   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:31.372203   57519 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:31.372223   57519 pod_ready.go:81] duration metric: took 400.985089ms for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:31.372231   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:33.379905   57519 pod_ready.go:102] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:35.378764   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:35.378793   57519 pod_ready.go:81] duration metric: took 4.006551156s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:35.378804   57519 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
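	Each block above is one iteration of the same loop: look for a running apiserver process, enumerate CRI containers per control-plane component, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A rough bash sketch of that pattern (an illustration of what the log shows, not minikube source code):

	    # Keep re-collecting logs for as long as no apiserver process exists.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sudo crictl ps -a --quiet --name=kube-apiserver   # still empty => no container created
	        sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
	        sleep 3                                           # interval approximated from the timestamps above
	    done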
	I0520 11:47:32.056046   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.555606   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:36.556231   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.862916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.363328   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.385861   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.884432   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
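	Every crictl query above returns an empty id list, i.e. no containers exist for any control-plane component. To tell whether that is because CRI-O itself is unhealthy or because the kubelet never requested any pod sandboxes, a few hedged node-side checks (assuming shell access to the guest):

	    sudo systemctl status crio --no-pager   # is the runtime up at all?
	    sudo crictl pods                        # any pod sandboxes created?
	    sudo crictl info | head -n 40           # runtime status/conditions as seen through the CRI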
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.557813   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.054616   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.863318   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:42.362965   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.893841   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.387666   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:43.055052   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:45.056688   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.863217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.362713   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:46.886365   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.384568   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.056743   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.555489   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.363267   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.863214   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.385469   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.884960   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:55.885430   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:52.055220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.059922   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.554742   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.862720   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.886732   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:00.387613   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
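	The describe-nodes failures all target localhost:8443 because that is the server recorded in the node-local kubeconfig used by the bundled kubectl. Two quick checks to confirm the endpoint and probe it directly (assuming curl is available in the guest image; expect "connection refused" here, matching the errors above):

	    sudo grep 'server:' /var/lib/minikube/kubeconfig
	    sudo curl -sk https://localhost:8443/healthz; echo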
	I0520 11:47:58.558508   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.055219   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:58.862939   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.362135   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:02.886809   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:04.887274   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:48:03.555849   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:06.059026   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.363379   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:05.863603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:07.386458   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:09.884793   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:08.554935   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.555580   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:08.362397   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.863960   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.385193   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:14.385749   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:13.058312   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.556249   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:13.363188   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.363224   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:17.861944   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:16.884628   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.885335   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.887306   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:18.055244   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.056597   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:19.862408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.862760   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:23.386334   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:25.388116   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.555469   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.362744   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:26.865098   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.884029   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.885238   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:27.054447   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.056075   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.556385   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.368181   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.862789   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:33.885831   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:34.055168   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.056035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.362591   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.863645   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.385081   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:38.884785   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.886065   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:38.556294   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.556701   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:39.362479   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:41.861916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.387356   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.884457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.056625   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.555575   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.864183   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.362155   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:47.885261   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:49.886656   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:48.055288   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.057220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:48.362952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.363988   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.863467   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.385482   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.386207   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:52.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.557133   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:55.362033   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:57.362624   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:56.885083   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.885662   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:57.055207   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.055409   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.554795   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.362869   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.363107   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.385471   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.386313   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.386387   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
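The log-gathering cycle above repeats every few seconds because no control-plane containers are running and the apiserver on localhost:8443 refuses connections, so each pass collects the same diagnostics. A minimal sketch of those diagnostics, runnable by hand on the node over SSH; every command below is copied from the Run: lines in this log (the kubectl path is the bundled v1.20.0 binary referenced above):

    # list control-plane containers known to CRI-O (empty output means none are running)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # last 400 lines of the kubelet and CRI-O service logs
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # node description via the bundled kubectl; fails with "connection refused" while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig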
	I0520 11:49:03.555624   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:06.055651   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.862582   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.862917   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.863213   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.886494   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.384808   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.055875   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.056882   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.362663   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.863246   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.388256   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.887822   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:12.557989   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:15.056569   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.864153   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.362222   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:16.888363   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.385523   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.056820   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.556351   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.362408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.861822   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.386338   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.390122   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:25.885748   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
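The interleaved pod_ready lines come from the other test profiles polling metrics-server pods that never report Ready. A hedged way to check the same condition by hand (the pod name is taken from the log above; the kubectl context depends on the test profile and is not shown here):

    # prints "True" once the Ready condition is met; these tests keep observing "False"
    kubectl -n kube-system get pod metrics-server-569cc877fc-v2zh9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'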
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:22.057526   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:24.554983   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.864449   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.363217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.384733   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:30.884703   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.056582   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:29.057033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.862736   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.362469   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.885757   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.385655   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.555033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.555749   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:33.363377   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.863335   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:37.885668   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.385866   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:37.559287   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.055915   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.362779   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.862212   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.862459   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.385950   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.387966   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.056415   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.555806   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:45.363614   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:47.863671   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:46.884916   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.885117   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:49:47.056668   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:49.554453   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.555691   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:50.363307   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:52.862785   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.386280   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:53.386860   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:55.886047   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:54.055131   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:56.055386   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:54.863545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.363023   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:58.385457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.385971   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:58.554785   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.555726   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:59.365132   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:01.863538   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:02.884908   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.384186   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.054799   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.056653   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.865619   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:07.387760   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.885607   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.555902   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.558035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:08.366901   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:10.862545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.862664   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
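(Editor's note) The grep/rm sequence above is minikube checking each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removing the file when the check fails (here every file is already missing, so each is treated as stale before the kubeadm init below). A hedged sketch of that check-then-remove pattern follows; the endpoint constant and function name are illustrative, and the real code runs these commands remotely through ssh_runner rather than on the local filesystem.

// staleconfig.go: sketch of the stale-kubeconfig cleanup in the log above.
// Keep a config only if it references the expected control-plane endpoint;
// otherwise remove it before running `kubeadm init`.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443" // expected API server URL

func cleanupStaleConfigs(files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: the "may not be in ... - will remove"
			// branch in the log above.
			fmt.Printf("%s: stale or missing, removing\n", f)
			_ = os.Remove(f) // ignore "no such file", same effect as `rm -f`
			continue
		}
		fmt.Printf("%s: references %s, leaving in place\n", f, endpoint)
	}
}

func main() {
	cleanupStaleConfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}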
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:12.385235   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.886286   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:12.055821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.865340   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.362603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.388804   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.885030   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.056232   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.555242   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.555339   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.363446   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.861989   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.886042   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:24.384353   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.555974   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.054899   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.862506   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:25.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.384558   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.384818   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.387241   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.055327   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.055628   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.365187   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.862199   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.863892   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.885393   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.384549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.555087   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.055509   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.362103   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.362430   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.386206   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.386258   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.055828   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.554574   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.554821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.862750   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.862973   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.885277   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.886162   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.555316   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:45.556417   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:44.364091   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.364286   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.385011   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.385754   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.387956   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.055237   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.555567   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.362404   58398 pod_ready.go:81] duration metric: took 4m0.006145654s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:50:48.362424   58398 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:50:48.362431   58398 pod_ready.go:38] duration metric: took 4m3.605442277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
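(Editor's note) The pod_ready lines that repeat throughout this log are a polling wait on the metrics-server pod's Ready condition; after roughly four minutes the context deadline expires and the wait is abandoned, which is what the two lines above record. Below is a minimal client-go sketch of such a wait; the kubeconfig path, poll interval, and the pod name taken from the log are assumptions for illustration, not minikube's exact wiring.

// podready.go: sketch of waiting, with a deadline, for a pod to report the
// Ready condition, as in the pod_ready.go lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod carries Ready=True in its conditions.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Give up after 4 minutes, mirroring the "context deadline exceeded" above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	const ns, name = "kube-system", "metrics-server-569cc877fc-ll4n8" // example pod from the log
	for {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("WaitExtra: waitPodCondition:", ctx.Err())
			return
		case <-time.After(2 * time.Second): // illustrative poll interval
		}
	}
}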
	I0520 11:50:48.362444   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:50:48.362469   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:48.362528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:48.422343   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.422361   58398 cri.go:89] found id: ""
	I0520 11:50:48.422369   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:48.422428   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.427448   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:48.427512   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:48.477327   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.477353   58398 cri.go:89] found id: ""
	I0520 11:50:48.477361   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:48.477418   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.482113   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:48.482161   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:48.525019   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:48.525039   58398 cri.go:89] found id: ""
	I0520 11:50:48.525046   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:48.525087   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.529544   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:48.529616   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:48.571031   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:48.571055   58398 cri.go:89] found id: ""
	I0520 11:50:48.571065   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:48.571117   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.575601   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:48.575653   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:48.617086   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:48.617111   58398 cri.go:89] found id: ""
	I0520 11:50:48.617120   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:48.617174   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.623512   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:48.623585   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:48.660141   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.660170   58398 cri.go:89] found id: ""
	I0520 11:50:48.660179   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:48.660236   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.665578   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:48.665648   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:48.704366   58398 cri.go:89] found id: ""
	I0520 11:50:48.704398   58398 logs.go:276] 0 containers: []
	W0520 11:50:48.704408   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:48.704418   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:48.704482   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:48.745897   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:48.745925   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:48.745931   58398 cri.go:89] found id: ""
	I0520 11:50:48.745940   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:48.745997   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.750982   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.755064   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:48.755084   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.804692   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:48.804720   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.862735   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:48.862765   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:48.914854   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:48.914885   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:48.930449   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:48.930473   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.980371   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:48.980398   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:49.019847   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:49.019871   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:49.066103   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:49.066131   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:49.102912   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:49.102938   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:49.148380   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:49.148403   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:49.184436   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:49.184466   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:49.658226   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:49.658267   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:49.715674   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:49.715724   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:52.360868   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:52.378573   58398 api_server.go:72] duration metric: took 4m14.818629715s to wait for apiserver process to appear ...
	I0520 11:50:52.378595   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:50:52.378634   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:52.378694   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:52.423033   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.423057   58398 cri.go:89] found id: ""
	I0520 11:50:52.423067   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:52.423124   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.427361   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:52.427422   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:52.472357   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.472383   58398 cri.go:89] found id: ""
	I0520 11:50:52.472393   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:52.472452   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.477472   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:52.477536   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:52.517721   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.517742   58398 cri.go:89] found id: ""
	I0520 11:50:52.517750   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:52.517806   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.522347   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:52.522415   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:52.562720   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:52.562747   58398 cri.go:89] found id: ""
	I0520 11:50:52.562757   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:52.562815   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.567322   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:52.567378   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:52.605752   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:52.605779   58398 cri.go:89] found id: ""
	I0520 11:50:52.605787   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:52.605841   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.609932   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:52.609996   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:52.646704   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:52.646734   58398 cri.go:89] found id: ""
	I0520 11:50:52.646745   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:52.646805   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.651422   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:52.651493   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:52.690880   58398 cri.go:89] found id: ""
	I0520 11:50:52.690909   58398 logs.go:276] 0 containers: []
	W0520 11:50:52.690919   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:52.690927   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:52.690997   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:52.733525   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.733558   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:52.733562   58398 cri.go:89] found id: ""
	I0520 11:50:52.733568   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:52.733615   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.738004   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.743086   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:52.743108   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:52.795042   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:52.795078   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:52.809356   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:52.809384   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.854486   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:52.854517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.904266   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:52.904289   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.941257   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:52.941285   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.886142   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.385782   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
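(Editor's note) The kubelet-check failure above is kubeadm polling the kubelet's local healthz endpoint and getting connection refused because the kubelet never came up on this node. The probe itself is just an HTTP GET against port 10248; a small hedged equivalent is below (the timeout value is an assumption).

// kubelethealthz.go: sketch of the health probe behind the
// "curl -sSL http://localhost:10248/healthz" message above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // illustrative timeout
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// With no kubelet listening, this is the "connection refused" case above.
		fmt.Println("kubelet healthz failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
}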
	I0520 11:50:53.057541   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.554909   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:52.978583   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:52.978613   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:53.081246   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:53.081276   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:53.124472   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:53.124509   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:53.167027   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:53.167055   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:53.223091   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:53.223125   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:53.261390   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:53.261417   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:53.680820   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:53.680877   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:56.227910   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:50:56.233856   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:50:56.234827   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:50:56.234852   58398 api_server.go:131] duration metric: took 3.856250847s to wait for apiserver health ...
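(Editor's note) Once an apiserver container exists, minikube switches from process checks to polling the apiserver's /healthz URL until it answers 200 "ok", as the api_server.go lines above show for https://192.168.50.167:8443/healthz. A hedged sketch of that poll is below; it skips TLS verification purely to keep the example self-contained, whereas the real check trusts the cluster's CA, and the overall timeout is an assumption.

// apiserverhealthz.go: sketch of polling the apiserver /healthz endpoint
// until it returns 200, as in the api_server.go lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.167:8443/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// NOTE: verification disabled only for this sketch; the real check
		// uses the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute) // illustrative overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return
			}
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for apiserver health")
}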
	I0520 11:50:56.234859   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:50:56.234880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:56.234923   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:56.273520   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.273539   58398 cri.go:89] found id: ""
	I0520 11:50:56.273545   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:56.273589   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.278121   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:56.278181   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:56.311934   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.311958   58398 cri.go:89] found id: ""
	I0520 11:50:56.311966   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:56.312011   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.315972   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:56.316033   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:56.351237   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.351259   58398 cri.go:89] found id: ""
	I0520 11:50:56.351268   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:56.351323   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.355480   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:56.355528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:56.389768   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.389782   58398 cri.go:89] found id: ""
	I0520 11:50:56.389789   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:56.389829   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.393815   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:56.393869   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:56.436557   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:56.436582   58398 cri.go:89] found id: ""
	I0520 11:50:56.436592   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:56.436637   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.440866   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:56.440943   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:56.480688   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.480734   58398 cri.go:89] found id: ""
	I0520 11:50:56.480744   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:56.480814   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.485880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:56.485974   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:56.521965   58398 cri.go:89] found id: ""
	I0520 11:50:56.521992   58398 logs.go:276] 0 containers: []
	W0520 11:50:56.521999   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:56.522005   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:56.522063   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:56.567585   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.567611   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:56.567617   58398 cri.go:89] found id: ""
	I0520 11:50:56.567627   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:56.567677   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.572972   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.577459   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:56.577484   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.636435   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:56.636467   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.675690   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:56.675714   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.720035   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:56.720065   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:56.734275   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:56.734302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:56.837237   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:56.837268   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.885482   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:56.885517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.922276   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:56.922302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.971119   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:56.971148   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:57.010383   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:57.010414   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:57.044880   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:57.044909   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:57.097906   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:57.097936   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:57.147600   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:57.147626   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:59.998576   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:50:59.998601   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:50:59.998605   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:50:59.998609   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:50:59.998613   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:50:59.998617   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:50:59.998621   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:50:59.998627   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:50:59.998631   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:50:59.998639   58398 system_pods.go:74] duration metric: took 3.763773626s to wait for pod list to return data ...
	I0520 11:50:59.998645   58398 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:00.001599   58398 default_sa.go:45] found service account: "default"
	I0520 11:51:00.001619   58398 default_sa.go:55] duration metric: took 2.967907ms for default service account to be created ...
	I0520 11:51:00.001626   58398 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:00.007124   58398 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:00.007151   58398 system_pods.go:89] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:51:00.007156   58398 system_pods.go:89] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:51:00.007161   58398 system_pods.go:89] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:51:00.007168   58398 system_pods.go:89] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:51:00.007175   58398 system_pods.go:89] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:51:00.007182   58398 system_pods.go:89] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:51:00.007194   58398 system_pods.go:89] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:00.007205   58398 system_pods.go:89] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:51:00.007215   58398 system_pods.go:126] duration metric: took 5.582307ms to wait for k8s-apps to be running ...
	I0520 11:51:00.007226   58398 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:00.007281   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:00.023780   58398 system_svc.go:56] duration metric: took 16.545966ms WaitForService to wait for kubelet
	I0520 11:51:00.023806   58398 kubeadm.go:576] duration metric: took 4m22.463864802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:00.023830   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:00.026757   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:00.026775   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:00.026788   58398 node_conditions.go:105] duration metric: took 2.951465ms to run NodePressure ...
	I0520 11:51:00.026798   58398 start.go:240] waiting for startup goroutines ...
	I0520 11:51:00.026804   58398 start.go:245] waiting for cluster config update ...
	I0520 11:51:00.026814   58398 start.go:254] writing updated cluster config ...
	I0520 11:51:00.027094   58398 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:00.078104   58398 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:00.079923   58398 out.go:177] * Done! kubectl is now configured to use "embed-certs-274976" cluster and "default" namespace by default
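The readiness checks logged before this point (kube-system pod list, default service account, kubelet service) can be spot-checked by hand; a rough sketch, assuming the embed-certs-274976 context is configured locally:

	# list the kube-system pods the test enumerates above
	kubectl --context embed-certs-274976 -n kube-system get pods
	# confirm the default service account exists
	kubectl --context embed-certs-274976 get serviceaccount default
	# on the node, the same kubelet check minikube runs
	sudo systemctl is-active --quiet service kubelet && echo kubelet active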
	I0520 11:50:57.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.385874   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:57.554979   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:59.555797   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.886892   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:05.384747   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.060543   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:04.555457   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:06.556619   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:07.386407   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:09.885284   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:09.055313   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.056015   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.886887   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:13.887204   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:15.888591   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:12.055930   58840 pod_ready.go:81] duration metric: took 4m0.00701193s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:12.055967   58840 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:51:12.055978   58840 pod_ready.go:38] duration metric: took 4m5.378750595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:12.055998   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:51:12.056034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:12.056101   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:12.115475   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.115503   58840 cri.go:89] found id: ""
	I0520 11:51:12.115513   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:12.115568   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.120850   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:12.120940   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:12.161022   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.161055   58840 cri.go:89] found id: ""
	I0520 11:51:12.161062   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:12.161130   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.165525   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:12.165586   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:12.211862   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.211890   58840 cri.go:89] found id: ""
	I0520 11:51:12.211899   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:12.211959   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.217037   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:12.217114   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:12.256985   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.257010   58840 cri.go:89] found id: ""
	I0520 11:51:12.257018   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:12.257064   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.261765   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:12.261825   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:12.299745   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.299773   58840 cri.go:89] found id: ""
	I0520 11:51:12.299781   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:12.299842   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.307681   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:12.307752   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:12.346729   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.346754   58840 cri.go:89] found id: ""
	I0520 11:51:12.346763   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:12.346819   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.352034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:12.352096   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:12.394387   58840 cri.go:89] found id: ""
	I0520 11:51:12.394405   58840 logs.go:276] 0 containers: []
	W0520 11:51:12.394412   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:12.394416   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:12.394460   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:12.434198   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:12.434224   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.434229   58840 cri.go:89] found id: ""
	I0520 11:51:12.434238   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:12.434298   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.439424   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.444096   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:12.444113   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:12.501635   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:12.501668   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:12.516529   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:12.516557   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:12.662690   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:12.662723   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.724415   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:12.724455   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.767417   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:12.767446   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.815739   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:12.815775   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.860515   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:12.860543   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.900348   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:12.900377   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.948489   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:12.948518   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.996999   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:12.997030   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:13.034866   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:13.034896   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:13.532828   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:13.532875   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.083150   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:51:16.106110   58840 api_server.go:72] duration metric: took 4m16.64390235s to wait for apiserver process to appear ...
	I0520 11:51:16.106155   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:51:16.106198   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:16.106262   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:16.148418   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:16.148450   58840 cri.go:89] found id: ""
	I0520 11:51:16.148459   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:16.148527   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.153400   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:16.153456   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:16.196145   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:16.196167   58840 cri.go:89] found id: ""
	I0520 11:51:16.196175   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:16.196231   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.200792   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:16.200857   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:16.244411   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:16.244435   58840 cri.go:89] found id: ""
	I0520 11:51:16.244446   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:16.244497   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.249637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:16.249707   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:16.288520   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.288542   58840 cri.go:89] found id: ""
	I0520 11:51:16.288550   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:16.288613   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.293307   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:16.293377   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:16.336963   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.336988   58840 cri.go:89] found id: ""
	I0520 11:51:16.336998   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:16.337053   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.342155   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:16.342236   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:16.390589   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.390610   58840 cri.go:89] found id: ""
	I0520 11:51:16.390617   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:16.390672   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.395319   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:16.395367   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:16.444947   58840 cri.go:89] found id: ""
	I0520 11:51:16.444974   58840 logs.go:276] 0 containers: []
	W0520 11:51:16.444985   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:16.445005   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:16.445069   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:16.485389   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.485414   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.485418   58840 cri.go:89] found id: ""
	I0520 11:51:16.485425   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:16.485481   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.490036   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.494650   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:16.494678   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:16.510820   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:16.510847   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.549583   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:16.549616   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.593009   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:16.593036   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.644523   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:16.644551   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.686326   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:16.686355   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.753308   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:16.753339   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.799568   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:16.799600   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:16.858422   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:16.858454   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:18.385869   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:20.386467   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:16.983556   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:16.983591   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:17.034851   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:17.034885   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:17.078220   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:17.078247   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:17.120946   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:17.120979   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:20.055408   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:51:20.059619   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:51:20.060649   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:51:20.060671   58840 api_server.go:131] duration metric: took 3.954509172s to wait for apiserver health ...
	I0520 11:51:20.060679   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:51:20.060698   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:20.060745   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:20.099666   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.099687   58840 cri.go:89] found id: ""
	I0520 11:51:20.099695   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:20.099746   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.104235   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:20.104286   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:20.162860   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.162881   58840 cri.go:89] found id: ""
	I0520 11:51:20.162887   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:20.162936   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.168215   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:20.168267   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:20.208871   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.208900   58840 cri.go:89] found id: ""
	I0520 11:51:20.208909   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:20.208982   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.213637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:20.213717   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:20.253194   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.253219   58840 cri.go:89] found id: ""
	I0520 11:51:20.253230   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:20.253286   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.258190   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:20.258261   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:20.299693   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.299713   58840 cri.go:89] found id: ""
	I0520 11:51:20.299720   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:20.299779   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.308104   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:20.308165   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:20.348954   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:20.348977   58840 cri.go:89] found id: ""
	I0520 11:51:20.348985   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:20.349030   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.353738   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:20.353798   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:20.388466   58840 cri.go:89] found id: ""
	I0520 11:51:20.388486   58840 logs.go:276] 0 containers: []
	W0520 11:51:20.388496   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:20.388504   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:20.388564   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:20.430628   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.430653   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.430659   58840 cri.go:89] found id: ""
	I0520 11:51:20.430668   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:20.430714   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.435241   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.439312   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:20.439332   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.476361   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:20.476388   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.514317   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:20.514345   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:20.566557   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:20.566604   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:20.582249   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:20.582279   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.639530   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:20.639568   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.676804   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:20.676837   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.713661   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:20.713691   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.757100   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:20.757125   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:20.805824   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:20.805859   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:20.921508   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:20.921540   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.968752   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:20.968785   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:21.024773   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:21.024802   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:23.915504   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:51:23.915536   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.915540   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.915544   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.915549   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.915552   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.915555   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.915562   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.915566   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.915574   58840 system_pods.go:74] duration metric: took 3.854889521s to wait for pod list to return data ...
	I0520 11:51:23.915582   58840 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:23.917651   58840 default_sa.go:45] found service account: "default"
	I0520 11:51:23.917672   58840 default_sa.go:55] duration metric: took 2.084872ms for default service account to be created ...
	I0520 11:51:23.917679   58840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:23.922566   58840 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:23.922589   58840 system_pods.go:89] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.922594   58840 system_pods.go:89] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.922599   58840 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.922604   58840 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.922608   58840 system_pods.go:89] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.922612   58840 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.922618   58840 system_pods.go:89] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.922623   58840 system_pods.go:89] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.922630   58840 system_pods.go:126] duration metric: took 4.946206ms to wait for k8s-apps to be running ...
	I0520 11:51:23.922638   58840 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:23.922679   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:23.941348   58840 system_svc.go:56] duration metric: took 18.700356ms WaitForService to wait for kubelet
	I0520 11:51:23.941388   58840 kubeadm.go:576] duration metric: took 4m24.479184818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:23.941414   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:23.945372   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:23.945400   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:23.945414   58840 node_conditions.go:105] duration metric: took 3.993897ms to run NodePressure ...
	I0520 11:51:23.945428   58840 start.go:240] waiting for startup goroutines ...
	I0520 11:51:23.945437   58840 start.go:245] waiting for cluster config update ...
	I0520 11:51:23.945450   58840 start.go:254] writing updated cluster config ...
	I0520 11:51:23.945739   58840 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:23.997255   58840 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:23.999485   58840 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-668445" cluster and "default" namespace by default
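The apiserver health probe logged above targets the non-default port 8444 of default-k8s-diff-port-668445. A hedged equivalent from outside the cluster, assuming anonymous access to /healthz (the Kubernetes default) and skipping CA verification for brevity:

	# should print 'ok' with HTTP 200, matching the log above
	curl -k https://192.168.61.221:8444/healthz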
	I0520 11:51:22.386549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:24.885200   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:26.887178   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:29.384900   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:31.386214   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:33.386309   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:35.379767   57519 pod_ready.go:81] duration metric: took 4m0.000946428s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:35.379806   57519 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0520 11:51:35.379826   57519 pod_ready.go:38] duration metric: took 4m4.525892516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:35.379854   57519 kubeadm.go:591] duration metric: took 4m13.966188007s to restartPrimaryControlPlane
	W0520 11:51:35.379905   57519 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:51:35.379929   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:07.074203   57519 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.694253213s)
	I0520 11:52:07.074270   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:07.089871   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:07.100708   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:07.111260   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:07.111283   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:07.111333   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:07.121864   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:07.121912   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:07.131426   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:07.140637   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:07.140693   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:07.150305   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.160398   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:07.160464   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.170170   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:07.179684   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:07.179740   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
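The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not reference it (here the files are simply absent after the reset). Loosely, the per-file logic is:

	# sketch of the cleanup minikube performs before re-running 'kubeadm init'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done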
	I0520 11:52:07.189036   57519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:07.242553   57519 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:07.242606   57519 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:07.396088   57519 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:07.396232   57519 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:07.396391   57519 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:07.612882   57519 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:07.614941   57519 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:07.615033   57519 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:07.615124   57519 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:07.615249   57519 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:07.615353   57519 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:07.615465   57519 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:07.615543   57519 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:07.615641   57519 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:07.615726   57519 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:07.615815   57519 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:07.615915   57519 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:07.615970   57519 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:07.616047   57519 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:07.744190   57519 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:08.012140   57519 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:08.080751   57519 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:08.369179   57519 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:08.507095   57519 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:08.507736   57519 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:08.510271   57519 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
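The failure output above already names the relevant diagnostics; run directly on the node they are:

	# kubelet status and logs
	systemctl status kubelet
	journalctl -xeu kubelet
	# the health endpoint the repeated kubelet-check messages probe
	curl -sSL http://localhost:10248/healthz
	# list control-plane containers under CRI-O and inspect a failing one (CONTAINERID is a placeholder)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID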
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:08.512322   57519 out.go:204]   - Booting up control plane ...
	I0520 11:52:08.512437   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:08.512530   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:08.512609   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:08.533828   57519 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:08.534843   57519 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:08.534903   57519 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:08.670084   57519 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:08.670192   57519 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:09.172058   57519 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.003696ms
	I0520 11:52:09.172181   57519 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:14.174127   57519 kubeadm.go:309] [api-check] The API server is healthy after 5.002000599s
	I0520 11:52:14.189265   57519 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:52:14.707457   57519 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:52:14.731273   57519 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:52:14.731480   57519 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:52:14.746881   57519 kubeadm.go:309] [bootstrap-token] Using token: 3eyb61.mp2wh3ubkzpzouqm
	I0520 11:52:14.748381   57519 out.go:204]   - Configuring RBAC rules ...
	I0520 11:52:14.748539   57519 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:52:14.764426   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:52:14.773450   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:52:14.778770   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:52:14.782819   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:52:14.787255   57519 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:52:14.900560   57519 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:52:15.342284   57519 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:52:15.901553   57519 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:52:15.904310   57519 kubeadm.go:309] 
	I0520 11:52:15.904380   57519 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:52:15.904389   57519 kubeadm.go:309] 
	I0520 11:52:15.904507   57519 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:52:15.904545   57519 kubeadm.go:309] 
	I0520 11:52:15.904578   57519 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:52:15.904653   57519 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:52:15.904713   57519 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:52:15.904723   57519 kubeadm.go:309] 
	I0520 11:52:15.904813   57519 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:52:15.904824   57519 kubeadm.go:309] 
	I0520 11:52:15.904910   57519 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:52:15.904935   57519 kubeadm.go:309] 
	I0520 11:52:15.905012   57519 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:52:15.905113   57519 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:52:15.905218   57519 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:52:15.905239   57519 kubeadm.go:309] 
	I0520 11:52:15.905326   57519 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:52:15.905433   57519 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:52:15.905443   57519 kubeadm.go:309] 
	I0520 11:52:15.905534   57519 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.905675   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:52:15.905725   57519 kubeadm.go:309] 	--control-plane 
	I0520 11:52:15.905738   57519 kubeadm.go:309] 
	I0520 11:52:15.905851   57519 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:52:15.905862   57519 kubeadm.go:309] 
	I0520 11:52:15.905989   57519 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.906131   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:52:15.907657   57519 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:15.907690   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:52:15.907703   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:15.909664   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:52:15.911132   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:52:15.927634   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:52:15.946135   57519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:52:15.946282   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:15.946298   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:52:16.161431   57519 ops.go:34] apiserver oom_adj: -16
	I0520 11:52:16.166400   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:16.666687   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.167351   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.666800   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.167094   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.667154   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.167261   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.666780   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.167002   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.667078   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.167229   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.666511   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.167275   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.666512   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.166471   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.667393   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.167417   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.667387   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.167046   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.666728   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.167118   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.667450   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.166833   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.666776   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.167244   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.666741   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.758525   57519 kubeadm.go:1107] duration metric: took 12.81228323s to wait for elevateKubeSystemPrivileges
	W0520 11:52:28.758571   57519 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:52:28.758582   57519 kubeadm.go:393] duration metric: took 5m7.403869165s to StartCluster
	I0520 11:52:28.758599   57519 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.758679   57519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:52:28.760488   57519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.760723   57519 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:28.762410   57519 out.go:177] * Verifying Kubernetes components...
	I0520 11:52:28.760783   57519 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:52:28.760966   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:28.763493   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:28.763499   57519 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:52:28.763527   57519 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:52:28.763528   57519 addons.go:69] Setting metrics-server=true in profile "no-preload-814599"
	W0520 11:52:28.763537   57519 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:52:28.763527   57519 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:52:28.763558   57519 addons.go:234] Setting addon metrics-server=true in "no-preload-814599"
	I0520 11:52:28.763562   57519 host.go:66] Checking if "no-preload-814599" exists ...
	W0520 11:52:28.763568   57519 addons.go:243] addon metrics-server should already be in state true
	I0520 11:52:28.763582   57519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:52:28.763589   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.763934   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763952   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763984   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764022   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.764047   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764081   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.778949   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 11:52:28.778995   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0520 11:52:28.779481   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.779522   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.780023   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780053   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780159   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780180   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780498   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780532   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780680   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.780720   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0520 11:52:28.781084   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.781094   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.781127   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.781605   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.781627   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.781951   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.782608   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.782651   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.784981   57519 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	W0520 11:52:28.785010   57519 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:52:28.785041   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.785399   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.785423   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.797463   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0520 11:52:28.797488   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0520 11:52:28.797958   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798050   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798571   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798589   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.798634   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798647   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.799040   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799043   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799206   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.799250   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.802002   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.804188   57519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:52:28.802940   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.805408   57519 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:52:28.804469   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0520 11:52:28.805363   57519 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:28.806586   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:52:28.806592   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:52:28.806601   57519 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:52:28.806607   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806613   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806965   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.807364   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.807375   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.807765   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.808210   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.808226   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.810837   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.810866   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811278   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811300   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811477   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.811716   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.811753   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811769   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811856   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812027   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.812308   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.812494   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.812643   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812795   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.824082   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0520 11:52:28.824580   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.825117   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.825144   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.825492   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.825709   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.827343   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.827545   57519 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:28.827561   57519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:52:28.827574   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.830623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831261   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.831287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.831625   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.831783   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.831912   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.984120   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:52:29.006910   57519 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016171   57519 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:52:29.016197   57519 node_ready.go:38] duration metric: took 9.258398ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016207   57519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:29.024085   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:29.092559   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:29.094377   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:52:29.094395   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:52:29.112741   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:29.133783   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:52:29.133804   57519 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:52:29.202727   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:29.202756   57519 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:52:29.347632   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:30.143576   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050979242s)
	I0520 11:52:30.143633   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143645   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143652   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030877915s)
	I0520 11:52:30.143694   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143706   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143986   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.143960   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.144014   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144026   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144037   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144040   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144053   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144061   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144068   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144045   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144326   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144345   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.146044   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.146057   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.146068   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.182464   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.182492   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.182802   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.182817   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.557632   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209948791s)
	I0520 11:52:30.557698   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.557715   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.557972   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.557992   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.557996   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558011   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.558022   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.558252   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.558267   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558277   57519 addons.go:470] Verifying addon metrics-server=true in "no-preload-814599"
	I0520 11:52:30.558302   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.560471   57519 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0520 11:52:30.561889   57519 addons.go:505] duration metric: took 1.801104362s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0520 11:52:31.030297   57519 pod_ready.go:102] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"False"
	I0520 11:52:31.538984   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.539002   57519 pod_ready.go:81] duration metric: took 2.514888376s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.539011   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544069   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.544089   57519 pod_ready.go:81] duration metric: took 5.072004ms for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544097   57519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549299   57519 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.549315   57519 pod_ready.go:81] duration metric: took 5.212738ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549323   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555210   57519 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.555229   57519 pod_ready.go:81] duration metric: took 5.899536ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555263   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560438   57519 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.560454   57519 pod_ready.go:81] duration metric: took 5.183454ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560463   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928261   57519 pod_ready.go:92] pod "kube-proxy-wkfkc" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.928280   57519 pod_ready.go:81] duration metric: took 367.811223ms for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928290   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328564   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:32.328589   57519 pod_ready.go:81] duration metric: took 400.292676ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328598   57519 pod_ready.go:38] duration metric: took 3.31238177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:32.328612   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:52:32.328672   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:52:32.346546   57519 api_server.go:72] duration metric: took 3.585793393s to wait for apiserver process to appear ...
	I0520 11:52:32.346575   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:52:32.346598   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:52:32.351109   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:52:32.352044   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:52:32.352070   57519 api_server.go:131] duration metric: took 5.485989ms to wait for apiserver health ...
	I0520 11:52:32.352080   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:52:32.532017   57519 system_pods.go:59] 9 kube-system pods found
	I0520 11:52:32.532048   57519 system_pods.go:61] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.532053   57519 system_pods.go:61] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.532056   57519 system_pods.go:61] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.532060   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.532064   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.532067   57519 system_pods.go:61] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.532071   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.532077   57519 system_pods.go:61] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.532081   57519 system_pods.go:61] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.532094   57519 system_pods.go:74] duration metric: took 180.008393ms to wait for pod list to return data ...
	I0520 11:52:32.532103   57519 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:52:32.729233   57519 default_sa.go:45] found service account: "default"
	I0520 11:52:32.729264   57519 default_sa.go:55] duration metric: took 197.154237ms for default service account to be created ...
	I0520 11:52:32.729278   57519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:52:32.932686   57519 system_pods.go:86] 9 kube-system pods found
	I0520 11:52:32.932717   57519 system_pods.go:89] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.932764   57519 system_pods.go:89] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.932777   57519 system_pods.go:89] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.932786   57519 system_pods.go:89] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.932796   57519 system_pods.go:89] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.932805   57519 system_pods.go:89] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.932812   57519 system_pods.go:89] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.932824   57519 system_pods.go:89] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.932835   57519 system_pods.go:89] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.932854   57519 system_pods.go:126] duration metric: took 203.569018ms to wait for k8s-apps to be running ...
	I0520 11:52:32.932867   57519 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:52:32.932945   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:32.948983   57519 system_svc.go:56] duration metric: took 16.10841ms WaitForService to wait for kubelet
	I0520 11:52:32.949014   57519 kubeadm.go:576] duration metric: took 4.188266242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:32.949035   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:52:33.128478   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:52:33.128503   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:52:33.128514   57519 node_conditions.go:105] duration metric: took 179.473758ms to run NodePressure ...
	I0520 11:52:33.128527   57519 start.go:240] waiting for startup goroutines ...
	I0520 11:52:33.128536   57519 start.go:245] waiting for cluster config update ...
	I0520 11:52:33.128548   57519 start.go:254] writing updated cluster config ...
	I0520 11:52:33.128860   57519 ssh_runner.go:195] Run: rm -f paused
	I0520 11:52:33.177360   57519 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:52:33.179409   57519 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 
	
	
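	Editor's note: the failure above is minikube's K8S_KUBELET_NOT_RUNNING exit path, raised because kubeadm's wait-control-plane phase never saw the kubelet answer on localhost:10248 before timing out. The commands below are only a sketch of the follow-up steps the log itself suggests (the kubeadm hints and the minikube suggestion at the end); the profile name <profile> is a placeholder, not taken from this run, and the cgroup-driver override is the hint printed by minikube, not a confirmed fix for this failure.
	
	    # inspect the kubelet on the node, as the kubeadm output advises
	    minikube ssh -p <profile> -- sudo systemctl status kubelet
	    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet -n 200
	
	    # list control-plane containers via CRI-O, per the kubeadm hint
	    minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	
	    # retry the start with the cgroup driver override suggested by minikube
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	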
	==> CRI-O <==
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.154090596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206402154057869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94eb4534-f432-47d0-a22d-11c3eaf71d35 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.154864768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=241bd032-7480-4a28-ac05-0532af165bde name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.154917181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=241bd032-7480-4a28-ac05-0532af165bde name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.155116982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=241bd032-7480-4a28-ac05-0532af165bde name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.195983994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f9fd9f8-a9d6-406f-b886-8be03f5fb5b7 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.196060559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f9fd9f8-a9d6-406f-b886-8be03f5fb5b7 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.197298406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2864c97-ac19-423d-aec3-fa1d7d123c42 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.197786057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206402197751797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2864c97-ac19-423d-aec3-fa1d7d123c42 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.198360900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2623b330-6ebc-4548-887e-23ff85c34380 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.198462852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2623b330-6ebc-4548-887e-23ff85c34380 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.198654872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2623b330-6ebc-4548-887e-23ff85c34380 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.240092491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fe553b2-5211-40ea-ae68-072e2b65a5f2 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.240194490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fe553b2-5211-40ea-ae68-072e2b65a5f2 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.241459575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ca509d4-10c2-4760-b2a5-fd53545630f0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.241830611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206402241810036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ca509d4-10c2-4760-b2a5-fd53545630f0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.242472406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85b7b511-5c56-444e-80ce-36de996d2b2e name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.242528762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85b7b511-5c56-444e-80ce-36de996d2b2e name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.242718613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85b7b511-5c56-444e-80ce-36de996d2b2e name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.279182112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b69a1b4-c79d-4471-94ff-5167681f63b0 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.279307186Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b69a1b4-c79d-4471-94ff-5167681f63b0 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.281340756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecd910a5-a3f5-47bd-a017-095290ca9edc name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.282065274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206402281998534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecd910a5-a3f5-47bd-a017-095290ca9edc name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.282831275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c04d750d-c8f9-4217-aa56-4cde6e19ba58 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.282939232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c04d750d-c8f9-4217-aa56-4cde6e19ba58 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:02 embed-certs-274976 crio[728]: time="2024-05-20 12:00:02.283208534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c04d750d-c8f9-4217-aa56-4cde6e19ba58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aa00e0ad2982a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   7581a88a8cf1d       storage-provisioner
	f5a564c207edf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   cd8a8ba27b281       busybox
	618ca565a4457       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   b93e6c3660965       coredns-7db6d8ff4d-64txr
	d0a1a121eec8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   7581a88a8cf1d       storage-provisioner
	11f7139d84207       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   da859d16cd2d4       kube-proxy-6n9d5
	e3ebf27b170f9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   ad71b186b0ca7       etcd-embed-certs-274976
	c19fee6b7c2fa       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   6f6f895fe620b       kube-scheduler-embed-certs-274976
	4bac0296aa85f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   92d237b96e279       kube-apiserver-embed-certs-274976
	f8d4797eeb87b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   0a6bf6121a3b0       kube-controller-manager-embed-certs-274976
	
	
	==> coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46534 - 38642 "HINFO IN 792029033212554799.8824431450542217466. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01370272s
	
	
	==> describe nodes <==
	Name:               embed-certs-274976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-274976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=embed-certs-274976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_39_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-274976
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:00:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:57:16 +0000   Mon, 20 May 2024 11:39:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:57:16 +0000   Mon, 20 May 2024 11:39:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:57:16 +0000   Mon, 20 May 2024 11:39:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:57:16 +0000   Mon, 20 May 2024 11:46:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.167
	  Hostname:    embed-certs-274976
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e25c7aa652ce41ee982539bff8a01180
	  System UUID:                e25c7aa6-52ce-41ee-9825-39bff8a01180
	  Boot ID:                    60e737e0-598a-439c-9784-642b1f4d46f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-64txr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-embed-certs-274976                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-embed-certs-274976             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-274976    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-6n9d5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-274976             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-ll4n8               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-274976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node embed-certs-274976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node embed-certs-274976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node embed-certs-274976 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node embed-certs-274976 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-274976 event: Registered Node embed-certs-274976 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-274976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-274976 event: Registered Node embed-certs-274976 in Controller
	
	
	==> dmesg <==
	[May20 11:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055084] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.714849] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.099291] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609851] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000034] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.607833] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.057057] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068624] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.186228] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.157335] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.303954] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.505438] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.062932] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.229661] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.596494] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.932714] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +4.630424] kauditd_printk_skb: 78 callbacks suppressed
	[May20 11:47] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] <==
	{"level":"info","ts":"2024-05-20T11:46:31.65234Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.167:2380"}
	{"level":"info","ts":"2024-05-20T11:46:31.652368Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.167:2380"}
	{"level":"info","ts":"2024-05-20T11:46:32.961916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T11:46:32.962064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:46:32.962154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 received MsgPreVoteResp from 7664cfa1ff0dacf1 at term 2"}
	{"level":"info","ts":"2024-05-20T11:46:32.962207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:32.96224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 received MsgVoteResp from 7664cfa1ff0dacf1 at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:32.962277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7664cfa1ff0dacf1 became leader at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:32.962316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7664cfa1ff0dacf1 elected leader 7664cfa1ff0dacf1 at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:32.964193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:46:32.964076Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7664cfa1ff0dacf1","local-member-attributes":"{Name:embed-certs-274976 ClientURLs:[https://192.168.50.167:2379]}","request-path":"/0/members/7664cfa1ff0dacf1/attributes","cluster-id":"d7e89ab1d6ffbfaa","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:46:32.965651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:46:32.965852Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:46:32.965883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:46:32.966473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.167:2379"}
	{"level":"info","ts":"2024-05-20T11:46:32.96751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-20T11:46:51.362216Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.754808ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12461899517565602721 > lease_revoke:<id:2cf18f95d31c0a25>","response":"size:28"}
	{"level":"warn","ts":"2024-05-20T11:46:51.362735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.176494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T11:46:51.362807Z","caller":"traceutil/trace.go:171","msg":"trace[670452299] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:580; }","duration":"237.303273ms","start":"2024-05-20T11:46:51.125494Z","end":"2024-05-20T11:46:51.362797Z","steps":["trace[670452299] 'agreement among raft nodes before linearized reading'  (duration: 237.176241ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:46:51.362527Z","caller":"traceutil/trace.go:171","msg":"trace[1291265025] linearizableReadLoop","detail":"{readStateIndex:616; appliedIndex:615; }","duration":"236.984464ms","start":"2024-05-20T11:46:51.125525Z","end":"2024-05-20T11:46:51.36251Z","steps":["trace[1291265025] 'read index received'  (duration: 29.37µs)","trace[1291265025] 'applied index is now lower than readState.Index'  (duration: 236.95378ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T11:47:12.879672Z","caller":"traceutil/trace.go:171","msg":"trace[1176906875] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"108.759041ms","start":"2024-05-20T11:47:12.770878Z","end":"2024-05-20T11:47:12.879637Z","steps":["trace[1176906875] 'process raft request'  (duration: 108.393866ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:47:15.542237Z","caller":"traceutil/trace.go:171","msg":"trace[1535775734] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"142.425198ms","start":"2024-05-20T11:47:15.399765Z","end":"2024-05-20T11:47:15.54219Z","steps":["trace[1535775734] 'process raft request'  (duration: 142.265358ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:56:32.99685Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2024-05-20T11:56:33.006299Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":832,"took":"9.080349ms","hash":2871989158,"current-db-size-bytes":2146304,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2146304,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T11:56:33.006361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2871989158,"revision":832,"compact-revision":-1}
	
	
	==> kernel <==
	 12:00:02 up 13 min,  0 users,  load average: 0.06, 0.09, 0.08
	Linux embed-certs-274976 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] <==
	I0520 11:54:35.382362       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:56:34.383566       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:56:34.383709       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0520 11:56:35.384489       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:56:35.384591       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:56:35.384629       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:56:35.384490       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:56:35.384737       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:56:35.386759       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:57:35.385611       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:57:35.385697       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:57:35.385705       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:57:35.387883       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:57:35.388207       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:57:35.388323       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:59:35.385896       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:59:35.386013       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:59:35.386025       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:59:35.389519       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:59:35.389622       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:59:35.389632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] <==
	I0520 11:54:17.590369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:54:47.114927       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:54:47.597863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:55:17.121325       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:55:17.606105       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:55:47.128854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:55:47.613082       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:56:17.132802       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:56:17.624019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:56:47.137042       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:56:47.632937       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:57:17.144088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:57:17.641503       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 11:57:35.410369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="276.735µs"
	I0520 11:57:46.411115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="51.178µs"
	E0520 11:57:47.148647       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:57:47.649382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:58:17.153493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:58:17.659811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:58:47.157862       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:58:47.668091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:59:17.163814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:59:17.675733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:59:47.170158       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:59:47.683331       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] <==
	I0520 11:46:36.023136       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:46:36.034520       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.167"]
	I0520 11:46:36.124627       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:46:36.124983       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:46:36.125336       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:46:36.130154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:46:36.130573       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:46:36.131194       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:36.132254       1 config.go:192] "Starting service config controller"
	I0520 11:46:36.132285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:46:36.132324       1 config.go:319] "Starting node config controller"
	I0520 11:46:36.132328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:46:36.133098       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:46:36.133173       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:46:36.233050       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:46:36.233094       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:46:36.234849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] <==
	I0520 11:46:34.373159       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:46:34.373224       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:34.381489       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:46:34.381537       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:46:34.382470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:46:34.382567       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	W0520 11:46:34.386804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:46:34.386903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:46:34.391187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.391245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.397584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0520 11:46:34.397639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0520 11:46:34.397708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0520 11:46:34.397748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0520 11:46:34.397830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.397859       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.398084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.398134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.398203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0520 11:46:34.398245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0520 11:46:34.398346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.398441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.398524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0520 11:46:34.398561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I0520 11:46:34.481766       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 11:57:30 embed-certs-274976 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:57:30 embed-certs-274976 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:57:35 embed-certs-274976 kubelet[940]: E0520 11:57:35.394892     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:57:46 embed-certs-274976 kubelet[940]: E0520 11:57:46.396863     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:57:57 embed-certs-274976 kubelet[940]: E0520 11:57:57.394466     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:58:08 embed-certs-274976 kubelet[940]: E0520 11:58:08.395642     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:58:20 embed-certs-274976 kubelet[940]: E0520 11:58:20.394495     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:58:30 embed-certs-274976 kubelet[940]: E0520 11:58:30.410023     940 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:58:30 embed-certs-274976 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:58:30 embed-certs-274976 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:58:30 embed-certs-274976 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:58:30 embed-certs-274976 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:58:31 embed-certs-274976 kubelet[940]: E0520 11:58:31.394131     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:58:42 embed-certs-274976 kubelet[940]: E0520 11:58:42.394816     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:58:54 embed-certs-274976 kubelet[940]: E0520 11:58:54.393991     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:59:09 embed-certs-274976 kubelet[940]: E0520 11:59:09.394658     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:59:21 embed-certs-274976 kubelet[940]: E0520 11:59:21.395131     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:59:30 embed-certs-274976 kubelet[940]: E0520 11:59:30.410684     940 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:59:30 embed-certs-274976 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:59:30 embed-certs-274976 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:59:30 embed-certs-274976 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:59:30 embed-certs-274976 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:59:32 embed-certs-274976 kubelet[940]: E0520 11:59:32.393975     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 11:59:46 embed-certs-274976 kubelet[940]: E0520 11:59:46.396154     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:00:01 embed-certs-274976 kubelet[940]: E0520 12:00:01.394848     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	
	
	==> storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] <==
	I0520 11:47:06.701610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:47:06.714580       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:47:06.714672       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:47:06.727779       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:47:06.727922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-274976_9a14e2dd-e833-40f7-968d-10ce61fe7437!
	I0520 11:47:06.728228       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2532f4d0-0cad-4897-ae61-4ccb20239f7d", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-274976_9a14e2dd-e833-40f7-968d-10ce61fe7437 became leader
	I0520 11:47:06.828665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-274976_9a14e2dd-e833-40f7-968d-10ce61fe7437!
	
	
	==> storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] <==
	I0520 11:46:35.963255       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 11:47:05.972247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274976 -n embed-certs-274976
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-274976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-ll4n8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-274976 describe pod metrics-server-569cc877fc-ll4n8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-274976 describe pod metrics-server-569cc877fc-ll4n8: exit status 1 (62.66315ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-ll4n8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-274976 describe pod metrics-server-569cc877fc-ll4n8: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.28s)
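The post-mortem above lists a single non-running pod, metrics-server-569cc877fc-ll4n8, and the kubelet log shows it stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4, while the kubernetes-dashboard pod this test family waits for never appears in the pod list. A minimal sketch of repeating the same triage by hand, assuming the embed-certs-274976 kubectl context from this run is still reachable (the metrics-server label selector is an assumption taken from the upstream metrics-server manifests, not from these logs):

	# List pods that are not Running across all namespaces (same query the harness ran above).
	kubectl --context embed-certs-274976 get pods -A --field-selector=status.phase!=Running
	# Check whether the dashboard pod the test polls for was ever scheduled.
	kubectl --context embed-certs-274976 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Inspect the failing metrics-server pod by label, since its generated name changes between restarts
	# (label selector assumed from the upstream metrics-server manifests).
	kubectl --context embed-certs-274976 -n kube-system describe pods -l k8s-app=metrics-server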

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0520 11:52:19.191555   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-20 12:00:24.53379616 +0000 UTC m=+5994.245678986
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
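The wait above is performed by the test's own client-go poller. A roughly equivalent manual check against the same cluster, shown here only as a sketch reusing the context name from the log (not a command the suite ran), would be:

    kubectl --context default-k8s-diff-port-668445 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

It fails in the same way when no pod carrying that label becomes Ready within the window.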
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-668445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-668445 logs -n 25: (2.099006533s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                           | kubernetes-upgrade-854547    | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:44:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:44:01.940145   58840 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:44:01.940398   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940409   58840 out.go:304] Setting ErrFile to fd 2...
	I0520 11:44:01.940415   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940633   58840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false
	I0520 11:44:01.942056   58840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5185,"bootTime":1716200257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:44:01.942118   58840 start.go:139] virtualization: kvm guest
	I0520 11:44:01.944331   58840 out.go:177] * [default-k8s-diff-port-668445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:44:01.945948   58840 notify.go:220] Checking for updates...
	I0520 11:44:01.945956   58840 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:44:01.947257   58840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:44:01.948525   58840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:44:01.949881   58840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:44:01.951156   58840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:44:01.952736   58840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:44:01.954565   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:44:01.954932   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.954979   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.969623   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0520 11:44:01.969984   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.970444   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.970489   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.970816   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.971013   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:01.971212   58840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:44:01.971493   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.971540   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.985556   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0520 11:44:01.985885   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.986365   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.986388   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.986707   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.986883   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:02.021778   58840 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:44:02.022944   58840 start.go:297] selected driver: kvm2
	I0520 11:44:02.022954   58840 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.023042   58840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:44:02.023717   58840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.023796   58840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:44:02.038516   58840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:44:02.038900   58840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:44:02.038931   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:44:02.038942   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:44:02.038989   58840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.039097   58840 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.041645   58840 out.go:177] * Starting "default-k8s-diff-port-668445" primary control-plane node in "default-k8s-diff-port-668445" cluster
	I0520 11:44:02.042785   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:44:02.042826   58840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:44:02.042839   58840 cache.go:56] Caching tarball of preloaded images
	I0520 11:44:02.042917   58840 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:44:02.042929   58840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:44:02.043041   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:44:02.043267   58840 start.go:360] acquireMachinesLock for default-k8s-diff-port-668445: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:44:06.605177   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:09.677144   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:15.757188   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:18.829221   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:24.909165   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:27.981180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:34.061180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:37.133203   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:43.213225   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:46.285226   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:52.365216   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:55.437121   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:01.517184   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:04.589206   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:10.669220   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:13.741264   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:19.821209   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:22.893127   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:28.973155   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:32.045217   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:38.125195   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.127376   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:41.127411   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127706   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:45:41.127739   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127954   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:45:41.129638   57519 machine.go:97] duration metric: took 4m44.663213682s to provisionDockerMachine
	I0520 11:45:41.129681   57519 fix.go:56] duration metric: took 4m44.685919857s for fixHost
	I0520 11:45:41.129692   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 4m44.68594997s
	W0520 11:45:41.129717   57519 start.go:713] error starting host: provision: host is not running
	W0520 11:45:41.129807   57519 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0520 11:45:41.129817   57519 start.go:728] Will try again in 5 seconds ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:46.134344   57519 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:46:00.745820   58398 start.go:364] duration metric: took 3m22.67445444s to acquireMachinesLock for "embed-certs-274976"
	I0520 11:46:00.745877   58398 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:00.745887   58398 fix.go:54] fixHost starting: 
	I0520 11:46:00.746287   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:00.746320   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:00.762678   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0520 11:46:00.763132   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:00.763654   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:00.763681   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:00.763991   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:00.764154   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:00.764308   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:00.765826   58398 fix.go:112] recreateIfNeeded on embed-certs-274976: state=Stopped err=<nil>
	I0520 11:46:00.765861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	W0520 11:46:00.766017   58398 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:00.768362   58398 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274976" ...
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
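configureAuth above regenerates server.pem with the SAN list shown at provision.go:117 and copies it to /etc/docker on the guest. A quick sketch for checking which SANs actually landed on the node (the path comes from the scp line above; the grep filter is just a convenience):

    # Inspect the server certificate the provisioner pushed
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
    # expect the SANs listed at provision.go:117 above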
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
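The text echoed back above is the drop-in that was written just before CRI-O was restarted. A small sketch for inspecting it on the node (file path and expected contents are taken from the command above):

    # CRIO_MINIKUBE_OPTIONS drop-in written by the provisioner
    sudo cat /etc/sysconfig/crio.minikube
    #   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl status crio --no-pager | head -n 5   # confirm the restart left crio active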
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
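Worked check of the delta reported above: guest clock 1716205560.721975623 minus the host-side reference 1716205560.638130573 is 0.083845050 s, i.e. the 83.84505ms shown, comfortably inside minikube's clock-skew tolerance.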
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
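The two blocks above (cri-docker, then docker) simply stop, disable and mask the Docker units so that only CRI-O answers on the CRI socket. A condensed sketch of the same systemctl sequence (commands as logged; only the grouping and comments are new):

    # Disable cri-dockerd first, then the docker engine itself
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service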
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
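Taken together, the steps above write /etc/crictl.yaml, rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf, enable bridge netfilter and IPv4 forwarding, and restart CRI-O. A condensed sketch of the same edits (commands copied from the log; the comments are the only additions):

    # Point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pause image expected by Kubernetes v1.20.0
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    # cgroupfs as cgroup manager, conmon in the pod cgroup
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # Bridge netfilter and forwarding, then reload and restart CRI-O
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio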
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:00.769810   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Start
	I0520 11:46:00.769988   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring networks are active...
	I0520 11:46:00.770615   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network default is active
	I0520 11:46:00.770928   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network mk-embed-certs-274976 is active
	I0520 11:46:00.771273   58398 main.go:141] libmachine: (embed-certs-274976) Getting domain xml...
	I0520 11:46:00.771977   58398 main.go:141] libmachine: (embed-certs-274976) Creating domain...
	I0520 11:46:02.023816   58398 main.go:141] libmachine: (embed-certs-274976) Waiting to get IP...
	I0520 11:46:02.024616   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.025031   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.025111   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.025009   59335 retry.go:31] will retry after 218.274206ms: waiting for machine to come up
	I0520 11:46:02.244661   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.245135   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.245163   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.245110   59335 retry.go:31] will retry after 359.160276ms: waiting for machine to come up
	I0520 11:46:02.606589   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.607106   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.607129   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.607044   59335 retry.go:31] will retry after 462.479953ms: waiting for machine to come up
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:03.071686   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.072242   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.072271   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.072198   59335 retry.go:31] will retry after 531.663033ms: waiting for machine to come up
	I0520 11:46:03.606121   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.606629   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.606653   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.606581   59335 retry.go:31] will retry after 690.866851ms: waiting for machine to come up
	I0520 11:46:04.299326   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:04.299799   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:04.299829   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:04.299750   59335 retry.go:31] will retry after 928.456543ms: waiting for machine to come up
	I0520 11:46:05.230115   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.230568   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.230586   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.230539   59335 retry.go:31] will retry after 761.597702ms: waiting for machine to come up
	I0520 11:46:05.993511   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.993897   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.993930   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.993854   59335 retry.go:31] will retry after 1.405329972s: waiting for machine to come up
	I0520 11:46:07.401706   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:07.402246   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:07.402312   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:07.402223   59335 retry.go:31] will retry after 1.717177668s: waiting for machine to come up
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
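For context, the ~473 MB preload tarball copied above is unpacked straight into /var and then removed; the equivalent manual steps are (paths and flags taken from the log):

    # Extract the preloaded image store, preserving capabilities/xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    # List what the runtime can see afterwards (in this run the v1.20.0 images
    # are still reported missing, which is why the cache-load path below runs)
    sudo crictl images --output json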
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
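The warning above says the on-host cache file for kube-proxy_v1.20.0 does not exist, so LoadCachedImages gives up and the control-plane images are presumably pulled later during cluster bring-up. A quick sketch for checking both sides (paths taken from the warning and from the inspect commands above):

    # On the CI host: does the expected cache file exist?
    ls -l /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
    # On the guest: what the runtime currently holds
    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'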
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
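The three scp-from-memory steps above drop the kubelet drop-in, the kubelet unit and the generated kubeadm config onto the guest. A small sketch for inspecting them in place (paths from the log):

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    cat /lib/systemd/system/kubelet.service
    sudo cat /var/tmp/minikube/kubeadm.yaml.new   # the kubeadm config dumped above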
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
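Each openssl x509 -hash -noout call above prints the subject hash that names the matching symlink under /etc/ssl/certs; for minikubeCA.pem that hash is b5213941 in this run, which is why the b5213941.0 link is created right after it. A minimal re-check:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
    ls -l /etc/ssl/certs/b5213941.0                                           # should link back to minikubeCA.pem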
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
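	(The log lines above show the restart path validating each control-plane certificate with `openssl x509 -noout -checkend 86400` before reusing it. The following is a minimal, hypothetical Go sketch of that check pattern, not minikube's actual implementation; the certificate paths are taken from the log.)

	// certValidFor24h reports whether the certificate at path is still valid
	// 24 hours from now, mirroring the `openssl x509 -noout -checkend 86400`
	// invocations in the log above. Illustrative helper only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func certValidFor24h(path string) bool {
		// openssl exits 0 only if the cert will NOT expire within the window.
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
		return cmd.Run() == nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			fmt.Println(p, "valid for 24h:", certValidFor24h(p))
		}
	}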
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
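	(The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`, which is how the existing cluster is restarted in place. Below is an illustrative Go sketch of that same phase sequence under the paths and version shown in the log; it is not minikube's code and has only minimal error handling.)

	// Re-run the kubeadm init phases used for the control-plane restart seen above.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubeadm", args...)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
			}
		}
	}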
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:09.121909   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:09.122323   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:09.122358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:09.122261   59335 retry.go:31] will retry after 1.49678652s: waiting for machine to come up
	I0520 11:46:10.620875   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:10.621226   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:10.621255   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:10.621184   59335 retry.go:31] will retry after 2.772139112s: waiting for machine to come up
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.396309   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:13.396705   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:13.396725   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:13.396678   59335 retry.go:31] will retry after 2.987909039s: waiting for machine to come up
	I0520 11:46:16.386007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:16.386500   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:16.386527   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:16.386460   59335 retry.go:31] will retry after 4.351760111s: waiting for machine to come up
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.138009   58840 start.go:364] duration metric: took 2m20.094710429s to acquireMachinesLock for "default-k8s-diff-port-668445"
	I0520 11:46:22.138085   58840 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:22.138096   58840 fix.go:54] fixHost starting: 
	I0520 11:46:22.138535   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:22.138577   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:22.154345   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0520 11:46:22.154701   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:22.155172   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:22.155198   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:22.155492   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:22.155667   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:22.155834   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:22.157486   58840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-668445: state=Stopped err=<nil>
	I0520 11:46:22.157525   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	W0520 11:46:22.157703   58840 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:22.159724   58840 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-668445" ...
	I0520 11:46:20.742954   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743480   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has current primary IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743503   58398 main.go:141] libmachine: (embed-certs-274976) Found IP for machine: 192.168.50.167
	I0520 11:46:20.743517   58398 main.go:141] libmachine: (embed-certs-274976) Reserving static IP address...
	I0520 11:46:20.743907   58398 main.go:141] libmachine: (embed-certs-274976) Reserved static IP address: 192.168.50.167
	I0520 11:46:20.743930   58398 main.go:141] libmachine: (embed-certs-274976) Waiting for SSH to be available...
	I0520 11:46:20.743955   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.744000   58398 main.go:141] libmachine: (embed-certs-274976) DBG | skip adding static IP to network mk-embed-certs-274976 - found existing host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"}
	I0520 11:46:20.744017   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Getting to WaitForSSH function...
	I0520 11:46:20.746685   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747071   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.747112   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747244   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH client type: external
	I0520 11:46:20.747259   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa (-rw-------)
	I0520 11:46:20.747278   58398 main.go:141] libmachine: (embed-certs-274976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:20.747287   58398 main.go:141] libmachine: (embed-certs-274976) DBG | About to run SSH command:
	I0520 11:46:20.747295   58398 main.go:141] libmachine: (embed-certs-274976) DBG | exit 0
	I0520 11:46:20.872997   58398 main.go:141] libmachine: (embed-certs-274976) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:20.873377   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetConfigRaw
	I0520 11:46:20.874039   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:20.876476   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.876827   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.876859   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.877169   58398 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:46:20.877407   58398 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:20.877432   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:20.877643   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.879744   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880089   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.880113   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880328   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.880529   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880673   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880840   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.881034   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.881263   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.881275   58398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:20.993535   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:20.993572   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.993813   58398 buildroot.go:166] provisioning hostname "embed-certs-274976"
	I0520 11:46:20.993841   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.994034   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.996567   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.996922   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.996970   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.997092   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.997280   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997437   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997580   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.997734   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.997993   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.998013   58398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274976 && echo "embed-certs-274976" | sudo tee /etc/hostname
	I0520 11:46:21.125482   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274976
	
	I0520 11:46:21.125512   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.128347   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128717   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.128746   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128916   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.129120   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129296   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129436   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.129613   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.129774   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.129789   58398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274976/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:21.250287   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:21.250320   58398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:21.250371   58398 buildroot.go:174] setting up certificates
	I0520 11:46:21.250385   58398 provision.go:84] configureAuth start
	I0520 11:46:21.250403   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:21.250726   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:21.253610   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254038   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.254053   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254303   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.256633   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.257046   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257206   58398 provision.go:143] copyHostCerts
	I0520 11:46:21.257266   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:21.257289   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:21.257369   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:21.257489   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:21.257499   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:21.257537   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:21.257617   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:21.257626   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:21.257661   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:21.257729   58398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274976 san=[127.0.0.1 192.168.50.167 embed-certs-274976 localhost minikube]
	I0520 11:46:21.458096   58398 provision.go:177] copyRemoteCerts
	I0520 11:46:21.458152   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:21.458178   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.460662   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461006   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.461042   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461205   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.461370   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.461545   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.461705   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.547703   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:21.574345   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:46:21.598843   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:21.624064   58398 provision.go:87] duration metric: took 373.664517ms to configureAuth
	I0520 11:46:21.624091   58398 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:21.624262   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:21.624330   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.626996   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627378   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.627406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627556   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.627711   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.627902   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.628077   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.628231   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.628402   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.628420   58398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:21.894896   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:21.894923   58398 machine.go:97] duration metric: took 1.017499149s to provisionDockerMachine
	I0520 11:46:21.894936   58398 start.go:293] postStartSetup for "embed-certs-274976" (driver="kvm2")
	I0520 11:46:21.894948   58398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:21.894966   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:21.895279   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:21.895304   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.897866   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898160   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.898193   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898292   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.898484   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.898663   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.898779   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.984571   58398 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:21.988891   58398 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:21.988918   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:21.988995   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:21.989129   58398 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:21.989257   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:21.999219   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:22.023032   58398 start.go:296] duration metric: took 128.082095ms for postStartSetup
	I0520 11:46:22.023076   58398 fix.go:56] duration metric: took 21.277188857s for fixHost
	I0520 11:46:22.023094   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.025971   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026362   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.026393   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026536   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.026725   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.026867   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.027017   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.027246   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:22.027424   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:22.027437   58398 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:22.137813   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205582.113779638
	
	I0520 11:46:22.137840   58398 fix.go:216] guest clock: 1716205582.113779638
	I0520 11:46:22.137850   58398 fix.go:229] Guest: 2024-05-20 11:46:22.113779638 +0000 UTC Remote: 2024-05-20 11:46:22.023079043 +0000 UTC m=+224.083510648 (delta=90.700595ms)
	I0520 11:46:22.137898   58398 fix.go:200] guest clock delta is within tolerance: 90.700595ms
	I0520 11:46:22.137906   58398 start.go:83] releasing machines lock for "embed-certs-274976", held for 21.39205527s
	I0520 11:46:22.137943   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.138222   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:22.140951   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.141388   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141558   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142024   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142187   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142245   58398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:22.142291   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.142395   58398 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:22.142417   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.144896   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145116   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145386   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145419   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145490   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145518   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145525   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145647   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145703   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145827   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145849   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146019   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146022   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:22.146175   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	W0520 11:46:22.226345   58398 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:22.226421   58398 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:22.248678   58398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:22.404298   58398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:22.410174   58398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:22.410249   58398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:22.426286   58398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:22.426311   58398 start.go:494] detecting cgroup driver to use...
	I0520 11:46:22.426369   58398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:22.441746   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:22.455445   58398 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:22.455515   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:22.469441   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:22.485963   58398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:22.604443   58398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:22.758542   58398 docker.go:233] disabling docker service ...
	I0520 11:46:22.758628   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:22.773371   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:22.786871   58398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:22.940022   58398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:23.076857   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:23.091980   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:23.111380   58398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:23.111434   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.122532   58398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:23.122597   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.134355   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.145484   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.156776   58398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:23.169017   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.180681   58398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.205664   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.217267   58398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:23.228490   58398 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:23.228555   58398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:23.243578   58398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:23.255394   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:23.387407   58398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:23.537543   58398 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:23.537613   58398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:23.542886   58398 start.go:562] Will wait 60s for crictl version
	I0520 11:46:23.542934   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:46:23.546740   58398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:23.587662   58398 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:23.587746   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.615735   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.648727   58398 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
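	(The cri-o setup steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image to registry.k8s.io/pause:3.9 and switch the cgroup driver to cgroupfs, then restart crio. The Go sketch below applies those two settings directly to the file; it is an illustration of the sed edits in the log, assumes both keys already exist in the config, and is not minikube's implementation.)

	// Apply the pause image and cgroup driver settings shown in the log above.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}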
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.161646   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Start
	I0520 11:46:22.161813   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring networks are active...
	I0520 11:46:22.162609   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network default is active
	I0520 11:46:22.163020   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network mk-default-k8s-diff-port-668445 is active
	I0520 11:46:22.163509   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Getting domain xml...
	I0520 11:46:22.164146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Creating domain...
	I0520 11:46:23.456104   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting to get IP...
	I0520 11:46:23.457157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457703   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457735   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.457652   59511 retry.go:31] will retry after 262.047261ms: waiting for machine to come up
	I0520 11:46:23.721329   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721756   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721815   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.721742   59511 retry.go:31] will retry after 301.868596ms: waiting for machine to come up
	I0520 11:46:24.025477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026042   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026072   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.026006   59511 retry.go:31] will retry after 431.470276ms: waiting for machine to come up
	I0520 11:46:24.458962   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459564   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.459450   59511 retry.go:31] will retry after 566.093902ms: waiting for machine to come up
	I0520 11:46:25.027215   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027787   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.027729   59511 retry.go:31] will retry after 471.2178ms: waiting for machine to come up
	I0520 11:46:25.500575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501067   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501106   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.501010   59511 retry.go:31] will retry after 682.542082ms: waiting for machine to come up
	I0520 11:46:26.184886   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185367   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185400   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:26.185315   59511 retry.go:31] will retry after 973.33325ms: waiting for machine to come up
	I0520 11:46:23.650320   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:23.653452   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.653919   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:23.653962   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.654259   58398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:23.658592   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:23.671310   58398 kubeadm.go:877] updating cluster {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:23.671460   58398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:23.671523   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:23.715873   58398 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:23.715947   58398 ssh_runner.go:195] Run: which lz4
	I0520 11:46:23.720362   58398 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:23.724821   58398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:23.724852   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:25.254824   58398 crio.go:462] duration metric: took 1.534501958s to copy over tarball
	I0520 11:46:25.254937   58398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:27.527739   58398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272764169s)
	I0520 11:46:27.527773   58398 crio.go:469] duration metric: took 2.272914971s to extract the tarball
	I0520 11:46:27.527780   58398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:27.564738   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:27.613196   58398 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:27.613220   58398 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:27.613229   58398 kubeadm.go:928] updating node { 192.168.50.167 8443 v1.30.1 crio true true} ...
	I0520 11:46:27.613321   58398 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:27.613381   58398 ssh_runner.go:195] Run: crio config
	I0520 11:46:27.669907   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:27.669931   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:27.669948   58398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:27.669969   58398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274976 NodeName:embed-certs-274976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:27.670117   58398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274976"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:27.670180   58398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:27.680414   58398 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:27.680473   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:27.690431   58398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0520 11:46:27.707625   58398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:27.724120   58398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0520 11:46:27.744218   58398 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:27.748173   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:27.760752   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:27.891061   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:27.909326   58398 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976 for IP: 192.168.50.167
	I0520 11:46:27.909354   58398 certs.go:194] generating shared ca certs ...
	I0520 11:46:27.909387   58398 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:27.909574   58398 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:27.909637   58398 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:27.909650   58398 certs.go:256] generating profile certs ...
	I0520 11:46:27.909754   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/client.key
	I0520 11:46:27.909843   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key.d6be4a1f
	I0520 11:46:27.909922   58398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key
	I0520 11:46:27.910072   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:27.910127   58398 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:27.910146   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:27.910183   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:27.910208   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:27.910229   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:27.910278   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:27.911030   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:27.949250   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.160027   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160492   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160515   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:27.160435   59511 retry.go:31] will retry after 1.263563226s: waiting for machine to come up
	I0520 11:46:28.426016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426503   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426555   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:28.426463   59511 retry.go:31] will retry after 1.407211588s: waiting for machine to come up
	I0520 11:46:29.835680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836101   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836121   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:29.836058   59511 retry.go:31] will retry after 2.320334078s: waiting for machine to come up
	I0520 11:46:27.987645   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:28.036652   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:28.083998   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:46:28.116104   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:28.154695   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:28.181013   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:28.213382   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:28.243550   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:28.268537   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:28.292856   58398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:28.310801   58398 ssh_runner.go:195] Run: openssl version
	I0520 11:46:28.316724   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:28.327461   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332286   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332345   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.339005   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:28.351192   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:28.362639   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367566   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367623   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.373275   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:28.384549   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:28.395600   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400136   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400188   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.405914   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:28.417533   58398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:28.422327   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:28.428746   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:28.434840   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:28.441227   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:28.447310   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:28.453450   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:28.462637   58398 kubeadm.go:391] StartCluster: {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:28.462746   58398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:28.462828   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.501759   58398 cri.go:89] found id: ""
	I0520 11:46:28.501826   58398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:28.512536   58398 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:28.512554   58398 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:28.512559   58398 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:28.512603   58398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:28.522288   58398 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:28.523266   58398 kubeconfig.go:125] found "embed-certs-274976" server: "https://192.168.50.167:8443"
	I0520 11:46:28.525188   58398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:28.535056   58398 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.167
	I0520 11:46:28.535092   58398 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:28.535104   58398 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:28.535157   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.573317   58398 cri.go:89] found id: ""
	I0520 11:46:28.573429   58398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:28.593111   58398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:28.603312   58398 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:28.603345   58398 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:28.603398   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:28.612794   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:28.612870   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:28.622819   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:28.634550   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:28.634614   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:28.644568   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.654918   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:28.654980   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.665581   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:28.677090   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:28.677167   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:28.687152   58398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:28.697505   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:28.833183   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:29.963435   58398 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130211655s)
	I0520 11:46:29.963486   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.185882   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.248946   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.334924   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:30.335020   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.835637   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.336028   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.354314   58398 api_server.go:72] duration metric: took 1.01937754s to wait for apiserver process to appear ...
	I0520 11:46:31.354343   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:31.354365   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:31.354916   58398 api_server.go:269] stopped: https://192.168.50.167:8443/healthz: Get "https://192.168.50.167:8443/healthz": dial tcp 192.168.50.167:8443: connect: connection refused
	I0520 11:46:31.854629   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.299609   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.299653   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.299669   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.370279   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.370304   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.370317   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.378426   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.378448   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.855034   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.860164   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:34.860205   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.354583   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.362461   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:35.362489   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.854836   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.859322   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:46:35.867050   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:35.867077   58398 api_server.go:131] duration metric: took 4.512726716s to wait for apiserver health ...
	I0520 11:46:35.867088   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:35.867096   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:35.869427   58398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.158195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158712   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:32.158637   59511 retry.go:31] will retry after 2.733920459s: waiting for machine to come up
	I0520 11:46:34.893892   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894436   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:34.894374   59511 retry.go:31] will retry after 3.119057685s: waiting for machine to come up
	I0520 11:46:35.871058   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:35.898211   58398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:35.942295   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:35.954120   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:35.954156   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:35.954171   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:35.954181   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:35.954194   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:35.954204   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 11:46:35.954213   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:35.954220   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:35.954229   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:46:35.954239   58398 system_pods.go:74] duration metric: took 11.924168ms to wait for pod list to return data ...
	I0520 11:46:35.954249   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:35.957627   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:35.957664   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:35.957679   58398 node_conditions.go:105] duration metric: took 3.421729ms to run NodePressure ...
	I0520 11:46:35.957700   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:36.235314   58398 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241075   58398 kubeadm.go:733] kubelet initialised
	I0520 11:46:36.241101   58398 kubeadm.go:734] duration metric: took 5.759765ms waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241110   58398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:36.248030   58398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.254230   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254263   58398 pod_ready.go:81] duration metric: took 6.20401ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.254274   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254281   58398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.259348   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259379   58398 pod_ready.go:81] duration metric: took 5.088735ms for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.259392   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259402   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.266963   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.266997   58398 pod_ready.go:81] duration metric: took 7.580054ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.267010   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.267020   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.346321   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346353   58398 pod_ready.go:81] duration metric: took 79.314801ms for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.346363   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346371   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.746645   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746674   58398 pod_ready.go:81] duration metric: took 400.291751ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.746686   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746692   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.146413   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146443   58398 pod_ready.go:81] duration metric: took 399.744378ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.146454   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146465   58398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.545943   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545966   58398 pod_ready.go:81] duration metric: took 399.491619ms for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.545974   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545981   58398 pod_ready.go:38] duration metric: took 1.304861628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:37.545997   58398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:37.558076   58398 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:37.558097   58398 kubeadm.go:591] duration metric: took 9.045531679s to restartPrimaryControlPlane
	I0520 11:46:37.558106   58398 kubeadm.go:393] duration metric: took 9.095477048s to StartCluster
	I0520 11:46:37.558125   58398 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.558199   58398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:37.559701   58398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.559920   58398 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:37.561751   58398 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:37.560038   58398 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:37.560143   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:37.563096   58398 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274976"
	I0520 11:46:37.563106   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:37.563124   58398 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274976"
	W0520 11:46:37.563133   58398 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:37.563156   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563164   58398 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274976"
	I0520 11:46:37.563179   58398 addons.go:69] Setting metrics-server=true in profile "embed-certs-274976"
	I0520 11:46:37.563193   58398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274976"
	I0520 11:46:37.563204   58398 addons.go:234] Setting addon metrics-server=true in "embed-certs-274976"
	W0520 11:46:37.563215   58398 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:37.563241   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563516   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563558   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563575   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563600   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563614   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563638   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.577924   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0520 11:46:37.578096   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0520 11:46:37.578120   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0520 11:46:37.578322   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578480   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578527   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578851   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.578879   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.578982   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579003   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579019   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579038   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579202   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579342   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579364   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579385   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.579824   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579867   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.579871   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579895   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.582214   58398 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274976"
	W0520 11:46:37.582230   58398 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:37.582249   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.582511   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.582542   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.594308   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0520 11:46:37.594672   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.595047   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.595067   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.595525   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.595551   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0520 11:46:37.595709   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.596037   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.596545   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.596568   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.596980   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.597579   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.597599   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.597802   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.599963   58398 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:37.598457   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0520 11:46:37.601485   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:37.601505   58398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:37.601527   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.601793   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.602219   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.602238   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.602502   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.602688   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.604122   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.604188   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.605753   58398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:37.604566   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.604874   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.607190   58398 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.607205   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:37.607221   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.607240   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.607263   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.607414   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.607729   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.610088   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610462   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.610483   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610738   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.610912   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.611072   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.611212   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.613655   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0520 11:46:37.613938   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.614386   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.614403   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.614653   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.614826   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.616151   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.616331   58398 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.616345   58398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:37.616362   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.619627   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620050   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.620072   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620214   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.620365   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.620480   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.620660   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
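Each of the three addon installers above opens its own SSH session to the node (the sshutil "new ssh client" lines) before copying manifests and running commands. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the key path, address, and command are taken from the log for illustration only, and this is not the minikube implementation:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Load the per-machine private key (path as shown in the log above).
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        // Dial the node and run one command, the same shape as each ssh_runner step.
        client, err := ssh.Dial("tcp", "192.168.50.167:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo systemctl start kubelet")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }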
	I0520 11:46:37.734680   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:37.751098   58398 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:37.842846   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:37.842867   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:37.853894   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.862145   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:37.862167   58398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:37.883656   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.884046   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:37.884061   58398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:37.941086   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:38.187392   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187415   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187819   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.187841   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.187855   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187822   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.188095   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.188115   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.193695   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.193710   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.193926   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.193942   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813454   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813472   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.813794   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.813813   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813821   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813843   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.813894   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.814140   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.814154   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897046   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897078   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897453   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897469   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897494   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897505   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897755   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897759   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897772   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897784   58398 addons.go:470] Verifying addon metrics-server=true in "embed-certs-274976"
	I0520 11:46:38.899910   58398 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.014993   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015405   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015458   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:38.015373   59511 retry.go:31] will retry after 4.505016985s: waiting for machine to come up
	I0520 11:46:38.901272   58398 addons.go:505] duration metric: took 1.341257876s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0520 11:46:39.754549   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:41.758717   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
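The node_ready lines above re-check the node's Ready condition until the 6m0s deadline set at "waiting up to 6m0s for node" expires. A minimal sketch of that poll-with-deadline shape in Go; checkReady is a hypothetical stand-in for the real apiserver query, and the poll interval is illustrative:

    package main

    import (
        "fmt"
        "time"
    )

    // waitNodeReady polls checkReady until it reports true or the timeout passes.
    func waitNodeReady(checkReady func() (bool, error), timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if ready, err := checkReady(); err == nil && ready {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("node not \"Ready\" within %s", timeout)
            }
            time.Sleep(2 * time.Second)
        }
    }

    func main() {
        start := time.Now()
        err := waitNodeReady(func() (bool, error) {
            return time.Since(start) > 5*time.Second, nil // fake readiness after 5s
        }, 6*time.Minute)
        fmt.Println("ready:", err == nil)
    }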
	I0520 11:46:43.865868   57519 start.go:364] duration metric: took 57.7314587s to acquireMachinesLock for "no-preload-814599"
	I0520 11:46:43.865920   57519 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:43.865931   57519 fix.go:54] fixHost starting: 
	I0520 11:46:43.866311   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:43.866347   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:43.885634   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0520 11:46:43.886068   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:43.886620   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:46:43.886659   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:43.887013   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:43.887200   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:46:43.887337   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:46:43.888902   57519 fix.go:112] recreateIfNeeded on no-preload-814599: state=Stopped err=<nil>
	I0520 11:46:43.888958   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	W0520 11:46:43.889108   57519 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:43.891052   57519 out.go:177] * Restarting existing kvm2 VM for "no-preload-814599" ...
	I0520 11:46:42.521630   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522114   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Found IP for machine: 192.168.61.221
	I0520 11:46:42.522140   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserving static IP address...
	I0520 11:46:42.522157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has current primary IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522534   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserved static IP address: 192.168.61.221
	I0520 11:46:42.522556   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for SSH to be available...
	I0520 11:46:42.522571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.522593   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | skip adding static IP to network mk-default-k8s-diff-port-668445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"}
	I0520 11:46:42.522613   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Getting to WaitForSSH function...
	I0520 11:46:42.524812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525199   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.525221   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525362   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH client type: external
	I0520 11:46:42.525391   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa (-rw-------)
	I0520 11:46:42.525427   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:42.525443   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | About to run SSH command:
	I0520 11:46:42.525455   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | exit 0
	I0520 11:46:42.648689   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:42.649122   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetConfigRaw
	I0520 11:46:42.649745   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:42.652220   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.652596   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652821   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:46:42.653026   58840 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:42.653046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:42.653235   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.655453   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655758   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.655777   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655923   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.656089   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656384   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.656551   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.656729   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.656739   58840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:42.761195   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:42.761225   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761450   58840 buildroot.go:166] provisioning hostname "default-k8s-diff-port-668445"
	I0520 11:46:42.761477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761666   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.764340   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764670   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.764695   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764801   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.764973   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765131   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765285   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.765472   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.765634   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.765647   58840 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-668445 && echo "default-k8s-diff-port-668445" | sudo tee /etc/hostname
	I0520 11:46:42.883179   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-668445
	
	I0520 11:46:42.883204   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.885913   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886302   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.886347   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.886699   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.886884   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.887040   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.887238   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.887419   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.887444   58840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-668445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-668445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-668445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:43.001769   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:43.001792   58840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:43.001825   58840 buildroot.go:174] setting up certificates
	I0520 11:46:43.001834   58840 provision.go:84] configureAuth start
	I0520 11:46:43.001842   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:43.002064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.004606   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.004897   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.004944   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.005087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.006929   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.007296   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007386   58840 provision.go:143] copyHostCerts
	I0520 11:46:43.007461   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:43.007470   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:43.007523   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:43.007608   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:43.007617   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:43.007637   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:43.007686   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:43.007693   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:43.007709   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:43.007753   58840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-668445 san=[127.0.0.1 192.168.61.221 default-k8s-diff-port-668445 localhost minikube]
	I0520 11:46:43.165780   58840 provision.go:177] copyRemoteCerts
	I0520 11:46:43.165835   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:43.165860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.168782   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169136   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.169181   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169343   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.169575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.169717   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.169892   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.252175   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:43.280313   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0520 11:46:43.306901   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:43.331191   58840 provision.go:87] duration metric: took 329.344445ms to configureAuth
	I0520 11:46:43.331227   58840 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:43.331478   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:43.331576   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.334173   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334573   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.334603   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334770   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.334961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335108   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335236   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.335381   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.335593   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.335615   58840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:43.630180   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:43.630213   58840 machine.go:97] duration metric: took 977.172725ms to provisionDockerMachine
	I0520 11:46:43.630229   58840 start.go:293] postStartSetup for "default-k8s-diff-port-668445" (driver="kvm2")
	I0520 11:46:43.630244   58840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:43.630270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.630597   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:43.630635   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.633307   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633650   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.633680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633798   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.633976   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.634141   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.634264   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.719350   58840 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:43.723320   58840 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:43.723339   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:43.723396   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:43.723478   58840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:43.723564   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:43.732422   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:43.757138   58840 start.go:296] duration metric: took 126.894206ms for postStartSetup
	I0520 11:46:43.757179   58840 fix.go:56] duration metric: took 21.619082215s for fixHost
	I0520 11:46:43.757202   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.759486   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.759860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.759887   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.760065   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.760248   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760425   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760527   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.760654   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.760824   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.760839   58840 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:43.865691   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205603.841593335
	
	I0520 11:46:43.865713   58840 fix.go:216] guest clock: 1716205603.841593335
	I0520 11:46:43.865722   58840 fix.go:229] Guest: 2024-05-20 11:46:43.841593335 +0000 UTC Remote: 2024-05-20 11:46:43.757183475 +0000 UTC m=+161.848941663 (delta=84.40986ms)
	I0520 11:46:43.865777   58840 fix.go:200] guest clock delta is within tolerance: 84.40986ms
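The fixHost step above compares the guest clock against the host and only acts when the delta exceeds a tolerance; here the delta is 84.40986ms and is accepted. A small Go sketch of that comparison, using the two timestamps from the log; the 2s tolerance is illustrative, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the guest/host clock lines above.
        guest := time.Date(2024, 5, 20, 11, 46, 43, 841593335, time.UTC)
        host := time.Date(2024, 5, 20, 11, 46, 43, 757183475, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        tolerance := 2 * time.Second // illustrative threshold only
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }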
	I0520 11:46:43.865790   58840 start.go:83] releasing machines lock for "default-k8s-diff-port-668445", held for 21.727726024s
	I0520 11:46:43.865824   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.866087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.868681   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869054   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.869076   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869256   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869792   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.870035   58840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:43.870091   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.870174   58840 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:43.870195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.872817   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873059   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873496   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873537   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873684   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.873687   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873861   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.873866   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.874033   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.874046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.874234   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	W0520 11:46:43.973840   58840 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:43.973913   58840 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:43.980706   58840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:44.132625   58840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:44.140558   58840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:44.140629   58840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:44.158922   58840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:44.158946   58840 start.go:494] detecting cgroup driver to use...
	I0520 11:46:44.159006   58840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:44.179206   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:44.194183   58840 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:44.194254   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:44.208701   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:44.223086   58840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:44.342811   58840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:44.491358   58840 docker.go:233] disabling docker service ...
	I0520 11:46:44.491428   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:44.509165   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:44.526088   58840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:44.700452   58840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:44.833129   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:44.849090   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:44.870148   58840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:44.870222   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.880951   58840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:44.881016   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.895038   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.908245   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.918844   58840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:44.929838   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.940446   58840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.960350   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.974587   58840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:44.985689   58840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:44.985748   58840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:44.999929   58840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:45.010732   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:45.140038   58840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:45.291656   58840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:45.291728   58840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:45.297351   58840 start.go:562] Will wait 60s for crictl version
	I0520 11:46:45.297412   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:46:45.301297   58840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:45.343635   58840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:45.343703   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.373914   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.404596   58840 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:43.892523   57519 main.go:141] libmachine: (no-preload-814599) Calling .Start
	I0520 11:46:43.892686   57519 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:46:43.893370   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:46:43.893729   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:46:43.894105   57519 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:46:43.894847   57519 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:46:45.229992   57519 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:46:45.230981   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.231451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.231518   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.231438   59753 retry.go:31] will retry after 187.539159ms: waiting for machine to come up
	I0520 11:46:45.420854   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.421369   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.421393   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.421328   59753 retry.go:31] will retry after 327.709711ms: waiting for machine to come up
	I0520 11:46:45.750799   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.751384   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.751414   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.751334   59753 retry.go:31] will retry after 445.096849ms: waiting for machine to come up
	I0520 11:46:46.197899   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.198567   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.198598   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.198536   59753 retry.go:31] will retry after 561.029629ms: waiting for machine to come up
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.405909   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:45.408865   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409399   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:45.409429   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409655   58840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:45.413815   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
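The two commands above make the host.minikube.internal mapping idempotent: the grep checks whether the entry already exists, and if not /etc/hosts is rewritten by filtering out any stale host.minikube.internal line, appending the current gateway IP, and copying the result back into place. A rough local sketch of that filter-and-append step (IP and hostname taken from the log; the write-to-temp-then-copy detail is simplified):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.61.1\thost.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale host.minikube.internal mapping; keep every other line.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        // Write to a temporary file first; the log then copies it over /etc/hosts with sudo cp.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Println(err)
        }
    }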
	I0520 11:46:45.426264   58840 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:45.426366   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:45.426405   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:45.468938   58840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:45.469021   58840 ssh_runner.go:195] Run: which lz4
	I0520 11:46:45.474957   58840 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:45.481006   58840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:45.481039   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:44.255728   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:44.756897   58398 node_ready.go:49] node "embed-certs-274976" has status "Ready":"True"
	I0520 11:46:44.756963   58398 node_ready.go:38] duration metric: took 7.005828991s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:44.756978   58398 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:44.767584   58398 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774049   58398 pod_ready.go:92] pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:44.774078   58398 pod_ready.go:81] duration metric: took 6.468023ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774099   58398 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795658   58398 pod_ready.go:92] pod "etcd-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.795691   58398 pod_ready.go:81] duration metric: took 2.021582833s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795705   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810711   58398 pod_ready.go:92] pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.810739   58398 pod_ready.go:81] duration metric: took 15.025215ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810751   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.760855   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.761534   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.761579   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.761484   59753 retry.go:31] will retry after 497.075791ms: waiting for machine to come up
	I0520 11:46:47.260059   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:47.260542   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:47.260580   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:47.260479   59753 retry.go:31] will retry after 910.980097ms: waiting for machine to come up
	I0520 11:46:48.173677   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:48.174161   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:48.174189   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:48.174108   59753 retry.go:31] will retry after 931.969789ms: waiting for machine to come up
	I0520 11:46:49.107672   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:49.108241   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:49.108271   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:49.108194   59753 retry.go:31] will retry after 1.075042352s: waiting for machine to come up
	I0520 11:46:50.184518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:50.185078   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:50.185108   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:50.185025   59753 retry.go:31] will retry after 1.174646184s: waiting for machine to come up
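The DBG lines above show libmachine polling the libvirt network for the new domain's DHCP lease, sleeping slightly longer between attempts each time. A minimal sketch of that poll-with-growing-backoff loop (the helper name and delays are illustrative, not minikube's actual retry package):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it succeeds or the deadline passes, sleeping a bit
    // longer after every failed attempt -- the pattern behind the repeated
    // "retry.go:31] will retry after ...: waiting for machine to come up" lines.
    func waitFor(check func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if err := check(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the pause between attempts
        }
        return errors.New("timed out waiting for machine")
    }

    func main() {
        attempts := 0
        err := waitFor(func() error {
            attempts++
            if attempts < 4 { // stand-in for "the domain has no DHCP lease yet"
                return errors.New("no IP address yet")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("done:", err)
    }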
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.010637   58840 crio.go:462] duration metric: took 1.535717978s to copy over tarball
	I0520 11:46:47.010719   58840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:49.308712   58840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297962166s)
	I0520 11:46:49.308747   58840 crio.go:469] duration metric: took 2.298080001s to extract the tarball
	I0520 11:46:49.308756   58840 ssh_runner.go:146] rm: /preloaded.tar.lz4
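The preload handling spans a few steps in the log: stat /preloaded.tar.lz4 to see whether the tarball is already on the guest, scp the cached ~394 MB tarball over when it is not, extract it into /var with lz4, and finally delete it. A rough sketch of the extract-and-clean-up step, shelling out to the same tar invocation the log runs over SSH (minimal error handling):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        tarball := "/preloaded.tar.lz4"

        // The log's existence check (stat -c "%s %y") plays this role: if the
        // tarball is missing it gets copied over before extraction.
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("tarball missing, would copy it over first:", err)
            return
        }

        // Same extraction command the log runs:
        //   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }
        _ = os.Remove(tarball) // the log removes the tarball once it has been extracted
    }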
	I0520 11:46:49.346851   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:49.393090   58840 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:49.393118   58840 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:49.393126   58840 kubeadm.go:928] updating node { 192.168.61.221 8444 v1.30.1 crio true true} ...
	I0520 11:46:49.393247   58840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-668445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:49.393326   58840 ssh_runner.go:195] Run: crio config
	I0520 11:46:49.438553   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:49.438575   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:49.438592   58840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:49.438618   58840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.221 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-668445 NodeName:default-k8s-diff-port-668445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:49.438792   58840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-668445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:49.438876   58840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:49.449415   58840 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:49.449487   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:49.460951   58840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0520 11:46:49.479654   58840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:49.497494   58840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0520 11:46:49.518474   58840 ssh_runner.go:195] Run: grep 192.168.61.221	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:49.522598   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:49.537237   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:49.686022   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:49.712850   58840 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445 for IP: 192.168.61.221
	I0520 11:46:49.712871   58840 certs.go:194] generating shared ca certs ...
	I0520 11:46:49.712885   58840 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:49.713065   58840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:49.713110   58840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:49.713123   58840 certs.go:256] generating profile certs ...
	I0520 11:46:49.713261   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.key
	I0520 11:46:49.713346   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key.50005da2
	I0520 11:46:49.713390   58840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key
	I0520 11:46:49.713548   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:49.713583   58840 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:49.713597   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:49.713629   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:49.713656   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:49.713686   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:49.713744   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:49.714370   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:49.751367   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:49.787162   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:49.822660   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:49.863401   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 11:46:49.895295   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:49.928089   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:49.952687   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:49.982562   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:50.007234   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:50.031099   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:50.055146   58840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:50.073066   58840 ssh_runner.go:195] Run: openssl version
	I0520 11:46:50.080951   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:50.092134   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097436   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097496   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.103714   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:50.114321   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:50.125104   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131225   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131276   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.139113   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:50.153959   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:50.165441   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170302   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170349   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.176388   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
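Each CA file above is installed the same way: it is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed with openssl x509 -hash (b5213941, 51391683 and 3ec20f2e in this run), and a <hash>.0 symlink is created in /etc/ssl/certs so lookup-by-hash can find it. A compact sketch of that hash-and-symlink step, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCA computes the OpenSSL subject hash of pem and points
    // /etc/ssl/certs/<hash>.0 at it, so certificate lookup by hash works.
    func linkCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // equivalent of ln -fs: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }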
	I0520 11:46:50.188372   58840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:50.193168   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:50.199298   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:50.208100   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:50.214704   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:50.220565   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:50.226463   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
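The openssl runs above answer one question per certificate: will it still be valid 86400 seconds (24 hours) from now? Only if every control-plane and etcd cert passes can the restart reuse them rather than regenerate them. The equivalent check in Go with crypto/x509 instead of shelling out (paths copied from the log, purely illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the certificate at path is no longer valid
    // 24 hours from now -- the same question `openssl x509 -checkend 86400` answers.
    func expiresSoon(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresSoon(p)
            fmt.Println(p, soon, err)
        }
    }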
	I0520 11:46:50.232400   58840 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:50.232472   58840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:50.232532   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.271977   58840 cri.go:89] found id: ""
	I0520 11:46:50.272070   58840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:50.283676   58840 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:50.283699   58840 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:50.283706   58840 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:50.283753   58840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:50.296808   58840 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:50.298406   58840 kubeconfig.go:125] found "default-k8s-diff-port-668445" server: "https://192.168.61.221:8444"
	I0520 11:46:50.300563   58840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:50.310940   58840 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.221
	I0520 11:46:50.310964   58840 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:50.310975   58840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:50.311010   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.361321   58840 cri.go:89] found id: ""
	I0520 11:46:50.361400   58840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:50.382699   58840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:50.397635   58840 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:50.397657   58840 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:50.397707   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0520 11:46:50.407303   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:50.407364   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:50.418331   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0520 11:46:50.427889   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:50.427938   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:50.437649   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.446810   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:50.446861   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.456428   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0520 11:46:50.465444   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:50.465512   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:50.478585   58840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:50.489053   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:50.607045   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.346500   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.594241   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.661179   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
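Because existing configuration was found, the restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of doing a full kubeadm init. A sketch of driving those phases the same way the log does, forcing the minikube-provided binaries onto PATH under sudo:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        path := "PATH=/var/lib/minikube/binaries/v1.30.1:" + os.Getenv("PATH")
        for _, phase := range phases {
            // Mirrors: sudo env PATH=... kubeadm init phase <phase...> --config /var/tmp/minikube/kubeadm.yaml
            args := append([]string{"env", path, "kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", phase, err)
                return
            }
        }
    }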
	I0520 11:46:51.751253   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:51.751331   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.318166   58398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.318193   58398 pod_ready.go:81] duration metric: took 1.507434439s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.318207   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323814   58398 pod_ready.go:92] pod "kube-proxy-6n9d5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.323850   58398 pod_ready.go:81] duration metric: took 5.632938ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323865   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356208   58398 pod_ready.go:92] pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.356235   58398 pod_ready.go:81] duration metric: took 32.360735ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356247   58398 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:50.363910   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:52.364530   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
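The pod_ready lines interleaved above all do the same thing: fetch each system-critical pod and wait until its Ready condition reports True (metrics-server never gets there in this run, hence the repeated "Ready":"False"). A bare-bones version of that check with client-go (the kubeconfig path and pod name are placeholders):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True, which is the
    // check the pod_ready lines in the log keep repeating.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-274976", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }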
	I0520 11:46:51.360905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:51.482451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:51.482493   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:51.361293   59753 retry.go:31] will retry after 1.455979778s: waiting for machine to come up
	I0520 11:46:52.818580   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:52.819134   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:52.819162   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:52.819073   59753 retry.go:31] will retry after 2.771639261s: waiting for machine to come up
	I0520 11:46:55.593581   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:55.594138   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:55.594167   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:55.594089   59753 retry.go:31] will retry after 2.881341208s: waiting for machine to come up
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.252101   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.752498   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.794795   58840 api_server.go:72] duration metric: took 1.043540264s to wait for apiserver process to appear ...
	I0520 11:46:52.794823   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:52.794847   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:52.795569   58840 api_server.go:269] stopped: https://192.168.61.221:8444/healthz: Get "https://192.168.61.221:8444/healthz": dial tcp 192.168.61.221:8444: connect: connection refused
	I0520 11:46:53.294972   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.015833   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.015872   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.015889   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.053674   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.053705   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.294988   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.300440   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.300474   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:56.795720   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.813192   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.813221   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.295045   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.298962   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:57.298994   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.795590   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.800795   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:46:57.809060   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:57.809084   58840 api_server.go:131] duration metric: took 5.014253865s to wait for apiserver health ...
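The healthz progression above is the normal shape of an apiserver restart: connection refused while the process starts, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200 "ok" about five seconds in. The wait amounts to hitting /healthz every half second and accepting only 200; a minimal sketch (TLS verification is skipped here for brevity, whereas a real client would trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.221:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthy:", string(body))
                    return
                }
                fmt.Println("not ready yet:", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }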
	I0520 11:46:57.809092   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:57.809098   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:57.811020   58840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:54.864263   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.363797   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.812202   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:57.824260   58840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:57.843189   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:57.851978   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:57.852012   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:57.852029   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:57.852035   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:57.852042   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:57.852046   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:46:57.852055   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:57.852063   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:57.852067   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:46:57.852073   58840 system_pods.go:74] duration metric: took 8.872046ms to wait for pod list to return data ...
	I0520 11:46:57.852084   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:57.855020   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:57.855043   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:57.855052   58840 node_conditions.go:105] duration metric: took 2.95984ms to run NodePressure ...
	I0520 11:46:57.855065   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:58.124410   58840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129201   58840 kubeadm.go:733] kubelet initialised
	I0520 11:46:58.129222   58840 kubeadm.go:734] duration metric: took 4.789292ms waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129229   58840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:58.134883   58840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.139522   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139557   58840 pod_ready.go:81] duration metric: took 4.638772ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.139569   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139583   58840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.143636   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143657   58840 pod_ready.go:81] duration metric: took 4.061011ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.143669   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143677   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.147479   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147499   58840 pod_ready.go:81] duration metric: took 3.813494ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.147510   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147517   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.247677   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247710   58840 pod_ready.go:81] duration metric: took 100.183226ms for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.247723   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247730   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.648289   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648339   58840 pod_ready.go:81] duration metric: took 400.599602ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.648352   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648365   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.047071   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047099   58840 pod_ready.go:81] duration metric: took 398.721478ms for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.047108   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047114   58840 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.447840   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447866   58840 pod_ready.go:81] duration metric: took 400.738125ms for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.447876   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447885   58840 pod_ready.go:38] duration metric: took 1.318647527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:59.447899   58840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:59.459985   58840 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:59.460011   58840 kubeadm.go:591] duration metric: took 9.176298972s to restartPrimaryControlPlane
	I0520 11:46:59.460023   58840 kubeadm.go:393] duration metric: took 9.227629245s to StartCluster
	I0520 11:46:59.460038   58840 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.460120   58840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:59.461897   58840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.462172   58840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:59.464109   58840 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:59.462263   58840 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:59.462375   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:59.465582   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:59.464212   58840 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465645   58840 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465657   58840 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:59.465693   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464218   58840 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465752   58840 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465768   58840 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:59.465794   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464219   58840 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465877   58840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-668445"
	I0520 11:46:59.466087   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466116   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466210   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466216   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466232   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466235   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.481963   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:46:59.482030   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0520 11:46:59.482551   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.482768   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.483115   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483140   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483495   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483516   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483759   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.483930   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.483979   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.484484   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.484509   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.485272   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 11:46:59.485657   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.486686   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.486716   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.487080   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.487552   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.487588   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.487989   58840 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.488007   58840 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:59.488037   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.488349   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.488384   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.500796   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0520 11:46:59.501376   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.501793   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0520 11:46:59.502058   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.502098   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.502308   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.502439   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.502661   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.502737   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 11:46:59.502976   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503009   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.503199   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.503329   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.503548   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.503648   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503659   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.504278   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.504632   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.504814   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.506755   58840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:59.504875   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.505278   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.508201   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:59.508219   58840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:59.508238   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.509465   58840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:59.510750   58840 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.510765   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:59.510778   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.511023   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511444   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.511480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511707   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.511939   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.512117   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.512271   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.513612   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.513980   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.514009   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.514207   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.514354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.514460   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.514564   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.523003   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0520 11:46:59.523361   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.523835   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.523858   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.524115   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.524266   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.525489   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.525661   58840 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.525673   58840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:59.525685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.527762   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.528087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.528309   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.528414   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.528559   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.652795   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:59.672785   58840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:46:59.759683   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.760767   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.775512   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:59.775536   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:59.807608   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:59.807633   58840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:59.835597   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:59.835624   58840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:59.862717   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:47:00.145002   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145032   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145352   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145352   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145372   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.145380   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145389   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145645   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145700   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145717   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.150992   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.151008   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.151358   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.151376   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.151385   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.802987   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803030   58840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042235492s)
	I0520 11:47:00.803063   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803081   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803386   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803404   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803422   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803424   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803434   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803439   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803449   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803464   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803621   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803631   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803642   58840 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-668445"
	I0520 11:47:00.803677   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803729   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803743   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.805628   58840 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0520 11:46:58.478246   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:58.478733   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:58.478765   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:58.478670   59753 retry.go:31] will retry after 3.220647355s: waiting for machine to come up
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.806818   58840 addons.go:505] duration metric: took 1.344560138s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0520 11:47:01.675861   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.863455   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.863859   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.702053   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702503   57519 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:47:01.702519   57519 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:47:01.702532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702964   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.702994   57519 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:47:01.703017   57519 main.go:141] libmachine: (no-preload-814599) DBG | skip adding static IP to network mk-no-preload-814599 - found existing host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"}
	I0520 11:47:01.703036   57519 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:47:01.703048   57519 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:47:01.705097   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705376   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.705409   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705482   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:47:01.705511   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:47:01.705543   57519 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:47:01.705558   57519 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:47:01.705593   57519 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:47:01.833182   57519 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:47:01.833551   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:47:01.834149   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:01.836532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837008   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.837047   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837310   57519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:47:01.837509   57519 machine.go:94] provisionDockerMachine start ...
	I0520 11:47:01.837527   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:01.837717   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.839814   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840151   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.840171   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840335   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.840485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840631   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.840958   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.841138   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.841149   57519 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:47:01.953190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:47:01.953221   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953480   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:47:01.953504   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953667   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.956155   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956487   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.956518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956684   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.956884   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957071   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957239   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.957416   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.957580   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.957593   57519 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:47:02.079993   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:47:02.080018   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.082623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.082950   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.082984   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.083154   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.083353   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083520   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083677   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.083852   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.084014   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.084029   57519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:47:02.204190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:47:02.204225   57519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:47:02.204252   57519 buildroot.go:174] setting up certificates
	I0520 11:47:02.204264   57519 provision.go:84] configureAuth start
	I0520 11:47:02.204280   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:02.204562   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:02.207132   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207477   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.207501   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207633   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.209905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210212   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.210231   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210414   57519 provision.go:143] copyHostCerts
	I0520 11:47:02.210472   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:47:02.210493   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:47:02.210560   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:47:02.210743   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:47:02.210758   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:47:02.210795   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:47:02.210875   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:47:02.210883   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:47:02.210905   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:47:02.210962   57519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:47:02.411383   57519 provision.go:177] copyRemoteCerts
	I0520 11:47:02.411436   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:47:02.411458   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.414071   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414443   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.414473   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414675   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.414859   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.415006   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.415125   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.507148   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:47:02.536944   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:47:02.562291   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:47:02.589034   57519 provision.go:87] duration metric: took 384.752486ms to configureAuth
	I0520 11:47:02.589092   57519 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:47:02.589320   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:47:02.589412   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.592287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592704   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.592731   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592957   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.593180   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593368   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593561   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.593724   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.593910   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.593931   57519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:47:02.899943   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:47:02.900037   57519 machine.go:97] duration metric: took 1.062511085s to provisionDockerMachine
	I0520 11:47:02.900074   57519 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:47:02.900112   57519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:47:02.900155   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:02.900546   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:47:02.900581   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.903643   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904057   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.904089   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904253   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.904433   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.904580   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.904710   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.999752   57519 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:47:03.005832   57519 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:47:03.005856   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:47:03.005940   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:47:03.006029   57519 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:47:03.006145   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:47:03.016853   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:03.049845   57519 start.go:296] duration metric: took 149.725ms for postStartSetup
	I0520 11:47:03.049886   57519 fix.go:56] duration metric: took 19.183954619s for fixHost
	I0520 11:47:03.049912   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.052841   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053175   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.053203   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053342   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.053537   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053730   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053899   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.054048   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:03.054227   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:03.054241   57519 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:47:03.166461   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205623.121199957
	
	I0520 11:47:03.166490   57519 fix.go:216] guest clock: 1716205623.121199957
	I0520 11:47:03.166500   57519 fix.go:229] Guest: 2024-05-20 11:47:03.121199957 +0000 UTC Remote: 2024-05-20 11:47:03.049891315 +0000 UTC m=+366.742654246 (delta=71.308642ms)
	I0520 11:47:03.166523   57519 fix.go:200] guest clock delta is within tolerance: 71.308642ms
	I0520 11:47:03.166530   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 19.30063132s
	I0520 11:47:03.166554   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.166858   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:03.169486   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169816   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.169861   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169990   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170546   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170728   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170814   57519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:47:03.170875   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.170920   57519 ssh_runner.go:195] Run: cat /version.json
	I0520 11:47:03.170944   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.173717   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.173944   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174128   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174153   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174338   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174365   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174371   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174570   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174571   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174793   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.174956   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.175049   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:03.175169   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:47:03.276456   57519 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:47:03.276537   57519 ssh_runner.go:195] Run: systemctl --version
	I0520 11:47:03.282946   57519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:47:03.428948   57519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:47:03.436462   57519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:47:03.436529   57519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:47:03.453022   57519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
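Note: the `%!p(MISSING)` in the find command above is another mangled format verb and presumably corresponds to find's `-printf "%p, "`. Reconstructed with shell quoting added, the step that disabled 87-podman-bridge.conflist looks roughly like:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;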
	I0520 11:47:03.453045   57519 start.go:494] detecting cgroup driver to use...
	I0520 11:47:03.453103   57519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:47:03.469505   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:47:03.483617   57519 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:47:03.483673   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:47:03.498514   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:47:03.513169   57519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:47:03.638355   57519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:47:03.819407   57519 docker.go:233] disabling docker service ...
	I0520 11:47:03.819473   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:47:03.837928   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:47:03.853747   57519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:47:03.976480   57519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:47:04.097685   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:47:04.112917   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:47:04.132979   57519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:47:04.133044   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.145501   57519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:47:04.145594   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.157957   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.168774   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.180521   57519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:47:04.191739   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.203232   57519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.221386   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
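Note: taken together, the sed edits above amount to the following fragment of /etc/crio/crio.conf.d/02-crio.conf (a sketch assembled from the commands, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]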
	I0520 11:47:04.231999   57519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:47:04.243401   57519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:47:04.243447   57519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:47:04.259232   57519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
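Note: the failed sysctl above is expected when the br_netfilter module is not yet loaded, since /proc/sys/net/bridge/ only appears once it is; loading the module and enabling IPv4 forwarding is exactly the recovery performed here. Done by hand, the sequence would be roughly:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # resolvable once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"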
	I0520 11:47:04.270945   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:04.387908   57519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:47:04.526507   57519 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:47:04.526588   57519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:47:04.531568   57519 start.go:562] Will wait 60s for crictl version
	I0520 11:47:04.531628   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.535751   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:47:04.583384   57519 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:47:04.583470   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.617545   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.652685   57519 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:47:04.654199   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:04.657274   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657624   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:04.657653   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657858   57519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:47:04.662344   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:04.676310   57519 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:47:04.676437   57519 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:47:04.676480   57519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:47:04.723427   57519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:47:04.723452   57519 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:47:04.723502   57519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.723705   57519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.723729   57519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.723773   57519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.723829   57519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.723896   57519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.723913   57519 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:47:04.723767   57519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.725737   57519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.725630   57519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.725696   57519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.849687   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896118   57519 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:47:04.896166   57519 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896214   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.901457   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.940481   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:47:04.940581   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946097   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0520 11:47:04.946120   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946168   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.958539   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:05.058882   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:05.059155   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:05.061672   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:47:05.082727   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:05.087840   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:05.606193   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.677231   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.176977   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.677184   58840 node_ready.go:49] node "default-k8s-diff-port-668445" has status "Ready":"True"
	I0520 11:47:06.677208   58840 node_ready.go:38] duration metric: took 7.004390787s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:47:06.677217   58840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:06.684890   58840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690323   58840 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.690345   58840 pod_ready.go:81] duration metric: took 5.420403ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690354   58840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695944   58840 pod_ready.go:92] pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.695968   58840 pod_ready.go:81] duration metric: took 5.606311ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695979   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701932   58840 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.701953   58840 pod_ready.go:81] duration metric: took 5.965003ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701966   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:04.363354   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:06.363448   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.125249   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1: (3.166660678s)
	I0520 11:47:08.125272   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (3.179080546s)
	I0520 11:47:08.125294   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:47:08.125314   57519 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:47:08.125343   57519 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.125346   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1: (3.066428315s)
	I0520 11:47:08.125383   57519 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:47:08.125398   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125415   57519 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.125415   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (3.066238572s)
	I0520 11:47:08.125472   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125517   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (3.063820229s)
	I0520 11:47:08.125478   57519 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:47:08.125582   57519 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.125596   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (3.04284016s)
	I0520 11:47:08.125612   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125637   57519 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:47:08.125659   57519 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.125668   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1: (3.037804525s)
	I0520 11:47:08.125696   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125707   57519 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:47:08.125709   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.519483497s)
	I0520 11:47:08.125727   57519 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.125758   57519 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:47:08.125775   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125808   57519 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.125843   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.140377   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.140435   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.140472   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.140518   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.140550   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.140604   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.277671   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:47:08.277785   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.286159   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:47:08.286163   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:47:08.286261   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:08.286297   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:08.287554   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:47:08.287623   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:08.300162   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:47:08.300183   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0520 11:47:08.300197   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300203   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0520 11:47:08.300221   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0520 11:47:08.300235   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0520 11:47:08.300168   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:47:08.300245   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300254   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:08.300315   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:08.304669   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0520 11:47:10.580290   57519 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.279947927s)
	I0520 11:47:10.580392   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0520 11:47:10.580421   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.280153203s)
	I0520 11:47:10.580444   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:47:10.580466   57519 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:10.580514   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:07.717364   58840 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:07.717391   58840 pod_ready.go:81] duration metric: took 1.01541621s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:07.717404   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041632   58840 pod_ready.go:92] pod "kube-proxy-7ddtk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:08.041657   58840 pod_ready.go:81] duration metric: took 324.24633ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041669   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:10.047877   58840 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.364366   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:10.366662   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:12.895005   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:11.650206   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.069627914s)
	I0520 11:47:11.650241   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:47:11.650265   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:11.650322   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:13.011174   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.360821671s)
	I0520 11:47:13.011203   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:47:13.011225   57519 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.011270   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:12.048869   58840 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:12.048894   58840 pod_ready.go:81] duration metric: took 4.007218109s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:12.048903   58840 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:14.057711   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.555471   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:15.365437   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:17.864290   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.885885   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.874589428s)
	I0520 11:47:16.885917   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:47:16.885952   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:16.886034   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:18.674100   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.788037677s)
	I0520 11:47:18.674128   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:47:18.674149   57519 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:18.674197   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:20.543671   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.86944462s)
	I0520 11:47:20.543705   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:47:20.543733   57519 cache_images.go:123] Successfully loaded all cached images
	I0520 11:47:20.543739   57519 cache_images.go:92] duration metric: took 15.820273224s to LoadCachedImages
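Note: the LoadCachedImages phase above repeats the same per-image pattern for every component: inspect the runtime for the expected image, remove whatever copy is there, then stream the cached tarball in with podman. Reduced to a single image, with the commands taken verbatim from the log:

    sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
    sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1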
	I0520 11:47:20.543749   57519 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:47:20.543863   57519 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
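Note: the [Unit]/[Service] snippet above is the kubelet systemd drop-in rendered for this node; judging by the 317-byte scp further down, it presumably lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and can be inspected on the guest with:

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf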
	I0520 11:47:20.543944   57519 ssh_runner.go:195] Run: crio config
	I0520 11:47:20.593647   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:20.593672   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:20.593692   57519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:47:20.593719   57519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:47:20.593863   57519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
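Note: the multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) appears to be what the 2161-byte scp below writes to the node as /var/tmp/minikube/kubeadm.yaml.new; it can be inspected on the guest with:

    sudo cat /var/tmp/minikube/kubeadm.yaml.new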
	
	I0520 11:47:20.593936   57519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:47:20.604739   57519 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:47:20.604791   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:47:20.614029   57519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:47:20.630587   57519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:47:20.647296   57519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:47:20.664453   57519 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:47:20.668296   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:20.681086   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:20.816542   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:47:20.834344   57519 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:47:20.834369   57519 certs.go:194] generating shared ca certs ...
	I0520 11:47:20.834388   57519 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:47:20.834578   57519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:47:20.834634   57519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:47:20.834649   57519 certs.go:256] generating profile certs ...
	I0520 11:47:20.834777   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:47:20.834876   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:47:20.834928   57519 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:47:20.835090   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:47:20.835146   57519 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:47:20.835159   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:47:20.835192   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:47:20.835226   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:47:20.835255   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:47:20.835314   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:20.836325   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:47:20.883227   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:47:20.918566   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:47:20.959097   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:47:21.006422   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:47:21.035717   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:47:21.071087   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:47:21.094330   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:47:21.119597   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:47:21.145488   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:47:21.173152   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:47:21.197627   57519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:47:21.214595   57519 ssh_runner.go:195] Run: openssl version
	I0520 11:47:21.220678   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:47:21.232251   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.236953   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.237015   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.243048   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:47:21.253667   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:47:21.264223   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268607   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268648   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.274112   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:47:21.284427   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:47:21.294811   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299403   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299455   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.304954   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:47:21.315320   57519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:47:21.319956   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:47:21.325692   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:47:21.331232   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:47:21.337240   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:18.556589   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.557476   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.363536   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:22.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:21.342881   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:47:21.349172   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:47:21.354721   57519 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:47:21.354839   57519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:47:21.354923   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.399473   57519 cri.go:89] found id: ""
	I0520 11:47:21.399554   57519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:47:21.413632   57519 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:47:21.413656   57519 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:47:21.413662   57519 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:47:21.413712   57519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:47:21.425556   57519 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:47:21.426515   57519 kubeconfig.go:125] found "no-preload-814599" server: "https://192.168.39.185:8443"
	I0520 11:47:21.428556   57519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:47:21.438628   57519 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.185
	I0520 11:47:21.438655   57519 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:47:21.438668   57519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:47:21.438708   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.485080   57519 cri.go:89] found id: ""
	I0520 11:47:21.485149   57519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:47:21.502591   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:47:21.512148   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:47:21.512168   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:47:21.512208   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:47:21.521378   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:47:21.521434   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:47:21.531568   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:47:21.541404   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:47:21.541459   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:47:21.552018   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.562667   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:47:21.562711   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.572240   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:47:21.581472   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:47:21.581535   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:47:21.591072   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:47:21.601114   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:21.716435   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.597273   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.792224   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.866757   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.999294   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:47:22.999389   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.499642   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.000424   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.022646   57519 api_server.go:72] duration metric: took 1.023349447s to wait for apiserver process to appear ...
	I0520 11:47:24.022673   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:47:24.022691   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:24.023162   57519 api_server.go:269] stopped: https://192.168.39.185:8443/healthz: Get "https://192.168.39.185:8443/healthz": dial tcp 192.168.39.185:8443: connect: connection refused
	I0520 11:47:24.523760   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:23.055887   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:25.555044   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:24.863476   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.363148   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.158548   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.158572   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.158585   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.205830   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.205867   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.523283   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.527618   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:27.527643   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.023238   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.029601   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.029630   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.522849   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.528350   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.528372   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.023422   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.027460   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.027493   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.523739   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.528289   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.528317   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.022859   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.027324   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:30.027348   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.522929   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.527596   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:47:30.534571   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:47:30.534592   57519 api_server.go:131] duration metric: took 6.511912744s to wait for apiserver health ...
	I0520 11:47:30.534601   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:30.534609   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:30.536647   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:47:30.537896   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:47:30.548779   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:47:30.568124   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:47:30.577489   57519 system_pods.go:59] 8 kube-system pods found
	I0520 11:47:30.577522   57519 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:47:30.577530   57519 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:47:30.577538   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:47:30.577544   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:47:30.577550   57519 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:47:30.577560   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:47:30.577567   57519 system_pods.go:61] "metrics-server-569cc877fc-v2zh9" [1be844eb-66ba-4a82-86fb-6f53de7e4d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:47:30.577577   57519 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:47:30.577586   57519 system_pods.go:74] duration metric: took 9.439264ms to wait for pod list to return data ...
	I0520 11:47:30.577598   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:47:30.581346   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:47:30.581367   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:47:30.581377   57519 node_conditions.go:105] duration metric: took 3.772205ms to run NodePressure ...
	I0520 11:47:30.581398   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:30.849703   57519 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853895   57519 kubeadm.go:733] kubelet initialised
	I0520 11:47:30.853917   57519 kubeadm.go:734] duration metric: took 4.191303ms waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853925   57519 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:30.861807   57519 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.866768   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866797   57519 pod_ready.go:81] duration metric: took 4.964994ms for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.866809   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866819   57519 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.871001   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871027   57519 pod_ready.go:81] duration metric: took 4.200278ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.871038   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871046   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.875615   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875635   57519 pod_ready.go:81] duration metric: took 4.580378ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.875645   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875655   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.971186   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971210   57519 pod_ready.go:81] duration metric: took 95.545422ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.971219   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971226   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:27.555356   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.556470   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.862807   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:32.364158   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:31.372203   57519 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:31.372223   57519 pod_ready.go:81] duration metric: took 400.985089ms for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:31.372231   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:33.379905   57519 pod_ready.go:102] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:35.378764   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:35.378793   57519 pod_ready.go:81] duration metric: took 4.006551156s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:35.378804   57519 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:32.056046   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.555606   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:36.556231   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.862916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.363328   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.385861   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.884432   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.557813   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.054616   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.863318   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:42.362965   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.893841   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.387666   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:43.055052   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:45.056688   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.863217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.362713   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:46.886365   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.384568   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.056743   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.555489   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.363267   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.863214   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.385469   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.884960   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:55.885430   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:52.055220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.059922   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.554742   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.862720   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.886732   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:00.387613   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:58.558508   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.055219   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:58.862939   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.362135   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:02.886809   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:04.887274   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:48:03.555849   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:06.059026   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.363379   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:05.863603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:07.386458   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:09.884793   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:08.554935   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.555580   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:08.362397   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.863960   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.385193   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:14.385749   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:13.058312   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.556249   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:13.363188   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.363224   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:17.861944   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:16.884628   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.885335   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.887306   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:18.055244   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.056597   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:19.862408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.862760   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:23.386334   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:25.388116   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.555469   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.362744   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:26.865098   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.884029   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.885238   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:27.054447   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.056075   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.556385   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.368181   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.862789   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:33.885831   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:34.055168   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.056035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.362591   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.863645   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.385081   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:38.884785   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.886065   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:38.556294   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.556701   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:39.362479   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:41.861916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.387356   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.884457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.056625   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.555575   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.864183   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.362155   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:47.885261   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:49.886656   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
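	The cycle above repeats for process 58316 roughly every three seconds: every `crictl ps -a --quiet --name=<component>` probe returns an empty ID list for each control-plane component, and `kubectl describe nodes` is refused on localhost:8443, i.e. the apiserver never came up on this node. Below is a minimal sketch only, grouping the exact probes the log shows into one pass so they can be re-run by hand; it assumes shell access to the node (for example via `minikube ssh -p <profile>`, profile name not shown here) and the binary/kubeconfig paths that appear in the log.

	# Sketch: replay the probes from this log manually, inside the node.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"        # empty output = container was never created
	done
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig    # "connection refused" = nothing listening on :8443
	sudo journalctl -u kubelet -n 400 | tail       # kubelet log usually shows why the static pods did not start

	If every `crictl ps` list comes back empty, the failure sits upstream of the containers (kubelet or the static-pod manifests), which is consistent with the repeated connection-refused errors in this run.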
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:48.055288   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.057220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:48.362952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.363988   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.863467   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.385482   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.386207   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:52.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.557133   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:55.362033   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:57.362624   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:56.885083   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.885662   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
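The interleaved pod_ready lines come from parallel test processes (PIDs 57519, 58398, 58840) polling their metrics-server pods, which never report Ready. As an illustration only, and assuming a working kubeconfig for the affected profile plus the usual k8s-app=metrics-server pod label (both assumptions, not taken from this log), the same condition could be inspected by hand with:

    # print each metrics-server pod name and its Ready condition status
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'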
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:57.055207   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.055409   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.554795   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.362869   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.363107   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.385471   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.386313   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.386387   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
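Every describe-nodes attempt in this log fails the same way: kubectl, using /var/lib/minikube/kubeconfig, targets localhost:8443 and the connection is refused. A quick hedged check from inside the node (assuming the ss utility from iproute2 is present in the guest image, which this log does not show) would be:

    # nothing should be listening if no apiserver container was ever created
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # same process probe the runner uses
    sudo pgrep -xnf kube-apiserver.*minikube.*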
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:03.555624   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:06.055651   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.862582   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.862917   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.863213   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.886494   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.384808   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.055875   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.056882   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.362663   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.863246   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.388256   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.887822   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:12.557989   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:15.056569   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.864153   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.362222   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:16.888363   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.385523   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.056820   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.556351   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.362408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.861822   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.386338   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.390122   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:25.885748   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:22.057526   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:24.554983   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.864449   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.363217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.384733   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:30.884703   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.056582   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:29.057033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.862736   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.362469   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.885757   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.385655   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.555033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.555749   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:33.363377   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.863335   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:37.885668   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.385866   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:37.559287   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.055915   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.362779   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.862212   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.862459   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.385950   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.387966   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.056415   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.555806   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:45.363614   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:47.863671   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:46.884916   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.885117   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:49:47.056668   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:49.554453   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.555691   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:50.363307   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:52.862785   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.386280   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:53.386860   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:55.886047   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:54.055131   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:56.055386   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:54.863545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.363023   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:58.385457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.385971   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:58.554785   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.555726   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:59.365132   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:01.863538   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:02.884908   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.384186   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.054799   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.056653   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.865619   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:07.387760   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.885607   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.555902   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.558035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:08.366901   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:10.862545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.862664   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
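	(Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not reference it; here every grep exits with status 2 simply because the files are absent after the reset. A rough sketch of that check-and-remove loop, assuming the same endpoint and file paths as in the log; the helper is hypothetical, not the kubeadm.go implementation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes kubeconfig files that do not point at the
	// expected control-plane endpoint, approximating the grep/rm sequence above.
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				// Matches the log: "No such file or directory" is not fatal;
				// the file is treated as already gone.
				fmt.Printf("%s: %v (skipping)\n", f, err)
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
				_ = os.Remove(f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	})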
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:12.385235   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.886286   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:12.055821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.865340   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.362603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.388804   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.885030   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.056232   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.555242   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.555339   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.363446   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.861989   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.886042   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:24.384353   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.555974   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.054899   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.862506   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:25.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.384558   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.384818   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.387241   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.055327   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.055628   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.365187   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.862199   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.863892   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.885393   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.384549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.555087   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.055509   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.362103   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.362430   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.386206   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.386258   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.055828   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.554574   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.554821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.862750   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.862973   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.885277   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.886162   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.555316   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:45.556417   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:44.364091   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.364286   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.385011   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.385754   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.387956   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.055237   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.555567   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.362404   58398 pod_ready.go:81] duration metric: took 4m0.006145654s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:50:48.362424   58398 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:50:48.362431   58398 pod_ready.go:38] duration metric: took 4m3.605442277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
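	(The pod_ready messages interleaved throughout this section come from a poll that inspects each pod's Ready condition; the metrics-server pods never report Ready:True, so after 4m0s the wait gives up with "context deadline exceeded" as shown above. A minimal sketch of that condition check using the upstream API types, assuming k8s.io/api is available; the helper name is invented for illustration:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's Ready condition is True, the same
	// property the pod_ready.go poll keeps printing as "Ready":"False" above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{}
		pod.Status.Conditions = []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}
		fmt.Println(isPodReady(pod)) // false, as for the metrics-server pods here
	})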
	I0520 11:50:48.362444   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:50:48.362469   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:48.362528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:48.422343   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.422361   58398 cri.go:89] found id: ""
	I0520 11:50:48.422369   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:48.422428   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.427448   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:48.427512   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:48.477327   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.477353   58398 cri.go:89] found id: ""
	I0520 11:50:48.477361   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:48.477418   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.482113   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:48.482161   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:48.525019   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:48.525039   58398 cri.go:89] found id: ""
	I0520 11:50:48.525046   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:48.525087   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.529544   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:48.529616   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:48.571031   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:48.571055   58398 cri.go:89] found id: ""
	I0520 11:50:48.571065   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:48.571117   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.575601   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:48.575653   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:48.617086   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:48.617111   58398 cri.go:89] found id: ""
	I0520 11:50:48.617120   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:48.617174   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.623512   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:48.623585   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:48.660141   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.660170   58398 cri.go:89] found id: ""
	I0520 11:50:48.660179   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:48.660236   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.665578   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:48.665648   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:48.704366   58398 cri.go:89] found id: ""
	I0520 11:50:48.704398   58398 logs.go:276] 0 containers: []
	W0520 11:50:48.704408   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:48.704418   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:48.704482   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:48.745897   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:48.745925   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:48.745931   58398 cri.go:89] found id: ""
	I0520 11:50:48.745940   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:48.745997   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.750982   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.755064   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:48.755084   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.804692   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:48.804720   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.862735   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:48.862765   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:48.914854   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:48.914885   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:48.930449   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:48.930473   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.980371   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:48.980398   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:49.019847   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:49.019871   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:49.066103   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:49.066131   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:49.102912   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:49.102938   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:49.148380   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:49.148403   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:49.184436   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:49.184466   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:49.658226   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:49.658267   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:49.715674   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:49.715724   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:52.360868   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:52.378573   58398 api_server.go:72] duration metric: took 4m14.818629715s to wait for apiserver process to appear ...
	I0520 11:50:52.378595   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:50:52.378634   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:52.378694   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:52.423033   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.423057   58398 cri.go:89] found id: ""
	I0520 11:50:52.423067   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:52.423124   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.427361   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:52.427422   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:52.472357   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.472383   58398 cri.go:89] found id: ""
	I0520 11:50:52.472393   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:52.472452   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.477472   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:52.477536   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:52.517721   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.517742   58398 cri.go:89] found id: ""
	I0520 11:50:52.517750   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:52.517806   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.522347   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:52.522415   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:52.562720   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:52.562747   58398 cri.go:89] found id: ""
	I0520 11:50:52.562757   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:52.562815   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.567322   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:52.567378   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:52.605752   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:52.605779   58398 cri.go:89] found id: ""
	I0520 11:50:52.605787   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:52.605841   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.609932   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:52.609996   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:52.646704   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:52.646734   58398 cri.go:89] found id: ""
	I0520 11:50:52.646745   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:52.646805   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.651422   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:52.651493   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:52.690880   58398 cri.go:89] found id: ""
	I0520 11:50:52.690909   58398 logs.go:276] 0 containers: []
	W0520 11:50:52.690919   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:52.690927   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:52.690997   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:52.733525   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.733558   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:52.733562   58398 cri.go:89] found id: ""
	I0520 11:50:52.733568   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:52.733615   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.738004   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.743086   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:52.743108   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:52.795042   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:52.795078   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:52.809356   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:52.809384   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.854486   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:52.854517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.904266   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:52.904289   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.941257   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:52.941285   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.886142   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.385782   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:53.057541   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.554909   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:52.978583   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:52.978613   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:53.081246   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:53.081276   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:53.124472   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:53.124509   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:53.167027   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:53.167055   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:53.223091   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:53.223125   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:53.261390   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:53.261417   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:53.680820   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:53.680877   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:56.227910   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:50:56.233856   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:50:56.234827   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:50:56.234852   58398 api_server.go:131] duration metric: took 3.856250847s to wait for apiserver health ...
	I0520 11:50:56.234859   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:50:56.234880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:56.234923   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:56.273520   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.273539   58398 cri.go:89] found id: ""
	I0520 11:50:56.273545   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:56.273589   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.278121   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:56.278181   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:56.311934   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.311958   58398 cri.go:89] found id: ""
	I0520 11:50:56.311966   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:56.312011   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.315972   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:56.316033   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:56.351237   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.351259   58398 cri.go:89] found id: ""
	I0520 11:50:56.351268   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:56.351323   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.355480   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:56.355528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:56.389768   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.389782   58398 cri.go:89] found id: ""
	I0520 11:50:56.389789   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:56.389829   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.393815   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:56.393869   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:56.436557   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:56.436582   58398 cri.go:89] found id: ""
	I0520 11:50:56.436592   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:56.436637   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.440866   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:56.440943   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:56.480688   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.480734   58398 cri.go:89] found id: ""
	I0520 11:50:56.480744   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:56.480814   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.485880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:56.485974   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:56.521965   58398 cri.go:89] found id: ""
	I0520 11:50:56.521992   58398 logs.go:276] 0 containers: []
	W0520 11:50:56.521999   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:56.522005   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:56.522063   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:56.567585   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.567611   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:56.567617   58398 cri.go:89] found id: ""
	I0520 11:50:56.567627   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:56.567677   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.572972   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.577459   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:56.577484   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.636435   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:56.636467   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.675690   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:56.675714   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.720035   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:56.720065   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:56.734275   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:56.734302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:56.837237   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:56.837268   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.885482   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:56.885517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.922276   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:56.922302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.971119   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:56.971148   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:57.010383   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:57.010414   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:57.044880   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:57.044909   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:57.097906   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:57.097936   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:57.147600   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:57.147626   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:59.998576   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:50:59.998601   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:50:59.998605   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:50:59.998609   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:50:59.998613   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:50:59.998617   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:50:59.998621   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:50:59.998627   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:50:59.998631   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:50:59.998639   58398 system_pods.go:74] duration metric: took 3.763773626s to wait for pod list to return data ...
	I0520 11:50:59.998645   58398 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:00.001599   58398 default_sa.go:45] found service account: "default"
	I0520 11:51:00.001619   58398 default_sa.go:55] duration metric: took 2.967907ms for default service account to be created ...
	I0520 11:51:00.001626   58398 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:00.007124   58398 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:00.007151   58398 system_pods.go:89] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:51:00.007156   58398 system_pods.go:89] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:51:00.007161   58398 system_pods.go:89] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:51:00.007168   58398 system_pods.go:89] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:51:00.007175   58398 system_pods.go:89] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:51:00.007182   58398 system_pods.go:89] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:51:00.007194   58398 system_pods.go:89] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:00.007205   58398 system_pods.go:89] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:51:00.007215   58398 system_pods.go:126] duration metric: took 5.582307ms to wait for k8s-apps to be running ...
	I0520 11:51:00.007226   58398 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:00.007281   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:00.023780   58398 system_svc.go:56] duration metric: took 16.545966ms WaitForService to wait for kubelet
	I0520 11:51:00.023806   58398 kubeadm.go:576] duration metric: took 4m22.463864802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:00.023830   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:00.026757   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:00.026775   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:00.026788   58398 node_conditions.go:105] duration metric: took 2.951465ms to run NodePressure ...
	I0520 11:51:00.026798   58398 start.go:240] waiting for startup goroutines ...
	I0520 11:51:00.026804   58398 start.go:245] waiting for cluster config update ...
	I0520 11:51:00.026814   58398 start.go:254] writing updated cluster config ...
	I0520 11:51:00.027094   58398 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:00.078104   58398 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:00.079923   58398 out.go:177] * Done! kubectl is now configured to use "embed-certs-274976" cluster and "default" namespace by default
	I0520 11:50:57.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.385874   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:57.554979   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:59.555797   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.886892   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:05.384747   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.060543   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:04.555457   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:06.556619   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:07.386407   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:09.885284   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:09.055313   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.056015   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.886887   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:13.887204   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:15.888591   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:12.055930   58840 pod_ready.go:81] duration metric: took 4m0.00701193s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:12.055967   58840 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:51:12.055978   58840 pod_ready.go:38] duration metric: took 4m5.378750595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:12.055998   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:51:12.056034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:12.056101   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:12.115475   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.115503   58840 cri.go:89] found id: ""
	I0520 11:51:12.115513   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:12.115568   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.120850   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:12.120940   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:12.161022   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.161055   58840 cri.go:89] found id: ""
	I0520 11:51:12.161062   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:12.161130   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.165525   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:12.165586   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:12.211862   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.211890   58840 cri.go:89] found id: ""
	I0520 11:51:12.211899   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:12.211959   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.217037   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:12.217114   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:12.256985   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.257010   58840 cri.go:89] found id: ""
	I0520 11:51:12.257018   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:12.257064   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.261765   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:12.261825   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:12.299745   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.299773   58840 cri.go:89] found id: ""
	I0520 11:51:12.299781   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:12.299842   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.307681   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:12.307752   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:12.346729   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.346754   58840 cri.go:89] found id: ""
	I0520 11:51:12.346763   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:12.346819   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.352034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:12.352096   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:12.394387   58840 cri.go:89] found id: ""
	I0520 11:51:12.394405   58840 logs.go:276] 0 containers: []
	W0520 11:51:12.394412   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:12.394416   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:12.394460   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:12.434198   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:12.434224   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.434229   58840 cri.go:89] found id: ""
	I0520 11:51:12.434238   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:12.434298   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.439424   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.444096   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:12.444113   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:12.501635   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:12.501668   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:12.516529   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:12.516557   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:12.662690   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:12.662723   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.724415   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:12.724455   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.767417   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:12.767446   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.815739   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:12.815775   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.860515   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:12.860543   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.900348   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:12.900377   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.948489   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:12.948518   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.996999   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:12.997030   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:13.034866   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:13.034896   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:13.532828   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:13.532875   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.083150   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:51:16.106110   58840 api_server.go:72] duration metric: took 4m16.64390235s to wait for apiserver process to appear ...
	I0520 11:51:16.106155   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:51:16.106198   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:16.106262   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:16.148418   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:16.148450   58840 cri.go:89] found id: ""
	I0520 11:51:16.148459   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:16.148527   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.153400   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:16.153456   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:16.196145   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:16.196167   58840 cri.go:89] found id: ""
	I0520 11:51:16.196175   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:16.196231   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.200792   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:16.200857   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:16.244411   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:16.244435   58840 cri.go:89] found id: ""
	I0520 11:51:16.244446   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:16.244497   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.249637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:16.249707   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:16.288520   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.288542   58840 cri.go:89] found id: ""
	I0520 11:51:16.288550   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:16.288613   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.293307   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:16.293377   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:16.336963   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.336988   58840 cri.go:89] found id: ""
	I0520 11:51:16.336998   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:16.337053   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.342155   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:16.342236   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:16.390589   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.390610   58840 cri.go:89] found id: ""
	I0520 11:51:16.390617   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:16.390672   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.395319   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:16.395367   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:16.444947   58840 cri.go:89] found id: ""
	I0520 11:51:16.444974   58840 logs.go:276] 0 containers: []
	W0520 11:51:16.444985   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:16.445005   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:16.445069   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:16.485389   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.485414   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.485418   58840 cri.go:89] found id: ""
	I0520 11:51:16.485425   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:16.485481   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.490036   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.494650   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:16.494678   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:16.510820   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:16.510847   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.549583   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:16.549616   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.593009   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:16.593036   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.644523   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:16.644551   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.686326   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:16.686355   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.753308   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:16.753339   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.799568   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:16.799600   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:16.858422   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:16.858454   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:18.385869   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:20.386467   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:16.983556   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:16.983591   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:17.034851   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:17.034885   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:17.078220   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:17.078247   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:17.120946   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:17.120979   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:20.055408   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:51:20.059619   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:51:20.060649   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:51:20.060671   58840 api_server.go:131] duration metric: took 3.954509172s to wait for apiserver health ...
	I0520 11:51:20.060679   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:51:20.060698   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:20.060745   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:20.099666   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.099687   58840 cri.go:89] found id: ""
	I0520 11:51:20.099695   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:20.099746   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.104235   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:20.104286   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:20.162860   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.162881   58840 cri.go:89] found id: ""
	I0520 11:51:20.162887   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:20.162936   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.168215   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:20.168267   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:20.208871   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.208900   58840 cri.go:89] found id: ""
	I0520 11:51:20.208909   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:20.208982   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.213637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:20.213717   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:20.253194   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.253219   58840 cri.go:89] found id: ""
	I0520 11:51:20.253230   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:20.253286   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.258190   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:20.258261   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:20.299693   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.299713   58840 cri.go:89] found id: ""
	I0520 11:51:20.299720   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:20.299779   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.308104   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:20.308165   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:20.348954   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:20.348977   58840 cri.go:89] found id: ""
	I0520 11:51:20.348985   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:20.349030   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.353738   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:20.353798   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:20.388466   58840 cri.go:89] found id: ""
	I0520 11:51:20.388486   58840 logs.go:276] 0 containers: []
	W0520 11:51:20.388496   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:20.388504   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:20.388564   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:20.430628   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.430653   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.430659   58840 cri.go:89] found id: ""
	I0520 11:51:20.430668   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:20.430714   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.435241   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.439312   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:20.439332   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.476361   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:20.476388   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.514317   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:20.514345   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:20.566557   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:20.566604   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:20.582249   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:20.582279   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.639530   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:20.639568   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.676804   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:20.676837   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.713661   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:20.713691   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.757100   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:20.757125   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:20.805824   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:20.805859   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:20.921508   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:20.921540   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.968752   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:20.968785   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:21.024773   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:21.024802   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:23.915504   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:51:23.915536   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.915540   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.915544   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.915549   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.915552   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.915555   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.915562   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.915566   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.915574   58840 system_pods.go:74] duration metric: took 3.854889521s to wait for pod list to return data ...
	I0520 11:51:23.915582   58840 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:23.917651   58840 default_sa.go:45] found service account: "default"
	I0520 11:51:23.917672   58840 default_sa.go:55] duration metric: took 2.084872ms for default service account to be created ...
	I0520 11:51:23.917679   58840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:23.922566   58840 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:23.922589   58840 system_pods.go:89] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.922594   58840 system_pods.go:89] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.922599   58840 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.922604   58840 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.922608   58840 system_pods.go:89] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.922612   58840 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.922618   58840 system_pods.go:89] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.922623   58840 system_pods.go:89] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.922630   58840 system_pods.go:126] duration metric: took 4.946206ms to wait for k8s-apps to be running ...
	I0520 11:51:23.922638   58840 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:23.922679   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:23.941348   58840 system_svc.go:56] duration metric: took 18.700356ms WaitForService to wait for kubelet
	I0520 11:51:23.941388   58840 kubeadm.go:576] duration metric: took 4m24.479184818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:23.941414   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:23.945372   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:23.945400   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:23.945414   58840 node_conditions.go:105] duration metric: took 3.993897ms to run NodePressure ...
	I0520 11:51:23.945428   58840 start.go:240] waiting for startup goroutines ...
	I0520 11:51:23.945437   58840 start.go:245] waiting for cluster config update ...
	I0520 11:51:23.945450   58840 start.go:254] writing updated cluster config ...
	I0520 11:51:23.945739   58840 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:23.997255   58840 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:23.999485   58840 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-668445" cluster and "default" namespace by default
	I0520 11:51:22.386549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:24.885200   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:26.887178   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:29.384900   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:31.386214   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:33.386309   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:35.379767   57519 pod_ready.go:81] duration metric: took 4m0.000946428s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:35.379806   57519 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0520 11:51:35.379826   57519 pod_ready.go:38] duration metric: took 4m4.525892516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:35.379854   57519 kubeadm.go:591] duration metric: took 4m13.966188007s to restartPrimaryControlPlane
	W0520 11:51:35.379905   57519 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:51:35.379929   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:07.074203   57519 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.694253213s)
	I0520 11:52:07.074270   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:07.089871   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:07.100708   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:07.111260   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:07.111283   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:07.111333   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:07.121864   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:07.121912   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:07.131426   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:07.140637   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:07.140693   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:07.150305   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.160398   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:07.160464   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.170170   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:07.179684   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:07.179740   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:07.189036   57519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:07.242553   57519 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:07.242606   57519 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:07.396088   57519 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:07.396232   57519 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:07.396391   57519 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:07.612882   57519 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:07.614941   57519 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:07.615033   57519 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:07.615124   57519 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:07.615249   57519 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:07.615353   57519 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:07.615465   57519 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:07.615543   57519 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:07.615641   57519 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:07.615726   57519 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:07.615815   57519 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:07.615915   57519 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:07.615970   57519 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:07.616047   57519 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:07.744190   57519 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:08.012140   57519 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:08.080751   57519 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:08.369179   57519 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:08.507095   57519 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:08.507736   57519 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:08.510271   57519 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
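	(The failure above is the generic kubeadm wait-control-plane timeout for the v1.20.0 run. A rough troubleshooting sequence on the node, assembled from the suggestions already printed in this log; only the crio socket path is specific to this run:
	  # is the kubelet running at all?
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet | tail -n 50
	  # the health endpoint kubeadm polls
	  curl -sSL http://localhost:10248/healthz
	  # any control-plane containers created by the kubelet?
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	)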
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:08.512322   57519 out.go:204]   - Booting up control plane ...
	I0520 11:52:08.512437   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:08.512530   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:08.512609   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:08.533828   57519 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:08.534843   57519 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:08.534903   57519 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:08.670084   57519 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:08.670192   57519 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:09.172058   57519 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.003696ms
	I0520 11:52:09.172181   57519 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:14.174127   57519 kubeadm.go:309] [api-check] The API server is healthy after 5.002000599s
	I0520 11:52:14.189265   57519 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:52:14.707457   57519 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:52:14.731273   57519 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:52:14.731480   57519 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:52:14.746881   57519 kubeadm.go:309] [bootstrap-token] Using token: 3eyb61.mp2wh3ubkzpzouqm
	I0520 11:52:14.748381   57519 out.go:204]   - Configuring RBAC rules ...
	I0520 11:52:14.748539   57519 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:52:14.764426   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:52:14.773450   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:52:14.778770   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:52:14.782819   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:52:14.787255   57519 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:52:14.900560   57519 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:52:15.342284   57519 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:52:15.901553   57519 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:52:15.904310   57519 kubeadm.go:309] 
	I0520 11:52:15.904380   57519 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:52:15.904389   57519 kubeadm.go:309] 
	I0520 11:52:15.904507   57519 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:52:15.904545   57519 kubeadm.go:309] 
	I0520 11:52:15.904578   57519 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:52:15.904653   57519 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:52:15.904713   57519 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:52:15.904723   57519 kubeadm.go:309] 
	I0520 11:52:15.904813   57519 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:52:15.904824   57519 kubeadm.go:309] 
	I0520 11:52:15.904910   57519 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:52:15.904935   57519 kubeadm.go:309] 
	I0520 11:52:15.905012   57519 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:52:15.905113   57519 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:52:15.905218   57519 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:52:15.905239   57519 kubeadm.go:309] 
	I0520 11:52:15.905326   57519 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:52:15.905433   57519 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:52:15.905443   57519 kubeadm.go:309] 
	I0520 11:52:15.905534   57519 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.905675   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:52:15.905725   57519 kubeadm.go:309] 	--control-plane 
	I0520 11:52:15.905738   57519 kubeadm.go:309] 
	I0520 11:52:15.905851   57519 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:52:15.905862   57519 kubeadm.go:309] 
	I0520 11:52:15.905989   57519 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.906131   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:52:15.907657   57519 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
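	(The bootstrap token printed above expires after 24 hours by default, so a join command would normally be regenerated on the control-plane node rather than copied from this log. A sketch using standard kubeadm commands, not part of this test:
	  sudo kubeadm token list
	  sudo kubeadm token create --print-join-command
	)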
	I0520 11:52:15.907690   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:52:15.907703   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:15.909664   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:52:15.911132   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:52:15.927634   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
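	(The 496-byte conflist written here is not echoed in the log. To see what was actually written, one could read it back from the node; the second command is only a generic check that CRI-O picked up a network config. Illustrative commands assuming the no-preload-814599 profile:
	  out/minikube-linux-amd64 -p no-preload-814599 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	  out/minikube-linux-amd64 -p no-preload-814599 ssh -- sudo crictl info | grep -A3 network
	)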
	I0520 11:52:15.946135   57519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:52:15.946282   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:15.946298   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:52:16.161431   57519 ops.go:34] apiserver oom_adj: -16
	I0520 11:52:16.166400   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:16.666687   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.167351   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.666800   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.167094   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.667154   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.167261   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.666780   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.167002   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.667078   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.167229   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.666511   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.167275   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.666512   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.166471   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.667393   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.167417   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.667387   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.167046   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.666728   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.167118   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.667450   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.166833   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.666776   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.167244   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.666741   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.758525   57519 kubeadm.go:1107] duration metric: took 12.81228323s to wait for elevateKubeSystemPrivileges
	W0520 11:52:28.758571   57519 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:52:28.758582   57519 kubeadm.go:393] duration metric: took 5m7.403869165s to StartCluster
	I0520 11:52:28.758599   57519 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.758679   57519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:52:28.760488   57519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.760723   57519 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:28.762410   57519 out.go:177] * Verifying Kubernetes components...
	I0520 11:52:28.760783   57519 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:52:28.760966   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:28.763493   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:28.763499   57519 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:52:28.763527   57519 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:52:28.763528   57519 addons.go:69] Setting metrics-server=true in profile "no-preload-814599"
	W0520 11:52:28.763537   57519 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:52:28.763527   57519 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:52:28.763558   57519 addons.go:234] Setting addon metrics-server=true in "no-preload-814599"
	I0520 11:52:28.763562   57519 host.go:66] Checking if "no-preload-814599" exists ...
	W0520 11:52:28.763568   57519 addons.go:243] addon metrics-server should already be in state true
	I0520 11:52:28.763582   57519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:52:28.763589   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.763934   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763952   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763984   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764022   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.764047   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764081   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.778949   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 11:52:28.778995   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0520 11:52:28.779481   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.779522   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.780023   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780053   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780159   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780180   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780498   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780532   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780680   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.780720   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0520 11:52:28.781084   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.781094   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.781127   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.781605   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.781627   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.781951   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.782608   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.782651   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.784981   57519 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	W0520 11:52:28.785010   57519 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:52:28.785041   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.785399   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.785423   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.797463   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0520 11:52:28.797488   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0520 11:52:28.797958   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798050   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798571   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798589   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.798634   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798647   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.799040   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799043   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799206   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.799250   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.802002   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.804188   57519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:52:28.802940   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.805408   57519 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:52:28.804469   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0520 11:52:28.805363   57519 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:28.806586   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:52:28.806592   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:52:28.806601   57519 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:52:28.806607   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806613   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806965   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.807364   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.807375   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.807765   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.808210   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.808226   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.810837   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.810866   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811278   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811300   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811477   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.811716   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.811753   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811769   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811856   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812027   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.812308   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.812494   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.812643   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812795   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.824082   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0520 11:52:28.824580   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.825117   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.825144   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.825492   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.825709   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.827343   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.827545   57519 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:28.827561   57519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:52:28.827574   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.830623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831261   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.831287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.831625   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.831783   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.831912   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.984120   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:52:29.006910   57519 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016171   57519 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:52:29.016197   57519 node_ready.go:38] duration metric: took 9.258398ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016207   57519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:29.024085   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:29.092559   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:29.094377   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:52:29.094395   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:52:29.112741   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:29.133783   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:52:29.133804   57519 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:52:29.202727   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:29.202756   57519 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:52:29.347632   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:30.143576   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050979242s)
	I0520 11:52:30.143633   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143645   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143652   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030877915s)
	I0520 11:52:30.143694   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143706   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143986   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.143960   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.144014   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144026   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144037   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144040   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144053   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144061   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144068   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144045   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144326   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144345   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.146044   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.146057   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.146068   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.182464   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.182492   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.182802   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.182817   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.557632   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209948791s)
	I0520 11:52:30.557698   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.557715   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.557972   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.557992   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.557996   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558011   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.558022   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.558252   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.558267   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558277   57519 addons.go:470] Verifying addon metrics-server=true in "no-preload-814599"
	I0520 11:52:30.558302   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.560471   57519 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0520 11:52:30.561889   57519 addons.go:505] duration metric: took 1.801104362s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
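	(After the addons are reported enabled, a quick manual verification of metrics-server might look like this; generic kubectl commands, not taken from this run, and kubectl top only succeeds once the metrics APIService is serving:
	  kubectl --context no-preload-814599 -n kube-system rollout status deployment metrics-server
	  kubectl --context no-preload-814599 top nodes
	)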
	I0520 11:52:31.030297   57519 pod_ready.go:102] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"False"
	I0520 11:52:31.538984   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.539002   57519 pod_ready.go:81] duration metric: took 2.514888376s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.539011   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544069   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.544089   57519 pod_ready.go:81] duration metric: took 5.072004ms for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544097   57519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549299   57519 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.549315   57519 pod_ready.go:81] duration metric: took 5.212738ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549323   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555210   57519 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.555229   57519 pod_ready.go:81] duration metric: took 5.899536ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555263   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560438   57519 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.560454   57519 pod_ready.go:81] duration metric: took 5.183454ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560463   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928261   57519 pod_ready.go:92] pod "kube-proxy-wkfkc" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.928280   57519 pod_ready.go:81] duration metric: took 367.811223ms for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928290   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328564   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:32.328589   57519 pod_ready.go:81] duration metric: took 400.292676ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328598   57519 pod_ready.go:38] duration metric: took 3.31238177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:32.328612   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:52:32.328672   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:52:32.346546   57519 api_server.go:72] duration metric: took 3.585793393s to wait for apiserver process to appear ...
	I0520 11:52:32.346575   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:52:32.346598   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:52:32.351109   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:52:32.352044   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:52:32.352070   57519 api_server.go:131] duration metric: took 5.485989ms to wait for apiserver health ...
	I0520 11:52:32.352080   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:52:32.532017   57519 system_pods.go:59] 9 kube-system pods found
	I0520 11:52:32.532048   57519 system_pods.go:61] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.532053   57519 system_pods.go:61] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.532056   57519 system_pods.go:61] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.532060   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.532064   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.532067   57519 system_pods.go:61] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.532071   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.532077   57519 system_pods.go:61] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.532081   57519 system_pods.go:61] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.532094   57519 system_pods.go:74] duration metric: took 180.008393ms to wait for pod list to return data ...
	I0520 11:52:32.532103   57519 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:52:32.729233   57519 default_sa.go:45] found service account: "default"
	I0520 11:52:32.729264   57519 default_sa.go:55] duration metric: took 197.154237ms for default service account to be created ...
	I0520 11:52:32.729278   57519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:52:32.932686   57519 system_pods.go:86] 9 kube-system pods found
	I0520 11:52:32.932717   57519 system_pods.go:89] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.932764   57519 system_pods.go:89] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.932777   57519 system_pods.go:89] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.932786   57519 system_pods.go:89] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.932796   57519 system_pods.go:89] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.932805   57519 system_pods.go:89] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.932812   57519 system_pods.go:89] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.932824   57519 system_pods.go:89] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.932835   57519 system_pods.go:89] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.932854   57519 system_pods.go:126] duration metric: took 203.569018ms to wait for k8s-apps to be running ...
	I0520 11:52:32.932867   57519 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:52:32.932945   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:32.948983   57519 system_svc.go:56] duration metric: took 16.10841ms WaitForService to wait for kubelet
	I0520 11:52:32.949014   57519 kubeadm.go:576] duration metric: took 4.188266242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:32.949035   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:52:33.128478   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:52:33.128503   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:52:33.128514   57519 node_conditions.go:105] duration metric: took 179.473758ms to run NodePressure ...
	I0520 11:52:33.128527   57519 start.go:240] waiting for startup goroutines ...
	I0520 11:52:33.128536   57519 start.go:245] waiting for cluster config update ...
	I0520 11:52:33.128548   57519 start.go:254] writing updated cluster config ...
	I0520 11:52:33.128860   57519 ssh_runner.go:195] Run: rm -f paused
	I0520 11:52:33.177360   57519 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:52:33.179409   57519 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 
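	The kubeadm output above already names the kubelet troubleshooting steps; collected here as one shell sequence for convenience (a sketch only: the crio socket path and the kubelet unit name are taken verbatim from the log above, while the minikube ssh wrapper and the <profile> placeholder are assumptions, not part of the recorded run):
	
		# run on the affected node, e.g. via: minikube ssh -p <profile>
		sudo systemctl status kubelet                  # is the kubelet unit active?
		sudo journalctl -xeu kubelet | tail -n 100     # most recent kubelet errors
		# list any control-plane containers CRI-O managed to start
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# then inspect the logs of a failing container
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID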
	
	
	==> CRI-O <==
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.045711452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206426045688826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aabd9dcb-6022-4096-856e-6c24bace1839 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.046336892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=343e7246-8f7f-4dc0-98e0-98a869404f13 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.046425614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=343e7246-8f7f-4dc0-98e0-98a869404f13 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.046605744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=343e7246-8f7f-4dc0-98e0-98a869404f13 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.087635529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18504876-eb02-47dc-a657-160d4937467b name=/runtime.v1.RuntimeService/Version
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.088081371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18504876-eb02-47dc-a657-160d4937467b name=/runtime.v1.RuntimeService/Version
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.089048040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cfeee28-efec-4a05-98a3-42d1bcdba0c3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.089462423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206426089441634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cfeee28-efec-4a05-98a3-42d1bcdba0c3 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.090063599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=861fdb4b-4fe1-4bdf-bfb5-689ede585009 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.090134129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=861fdb4b-4fe1-4bdf-bfb5-689ede585009 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.090351450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=861fdb4b-4fe1-4bdf-bfb5-689ede585009 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.133744540Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68eac99f-f479-4c55-ab5d-20d24ae706b4 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.134018277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68eac99f-f479-4c55-ab5d-20d24ae706b4 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.135355749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a46e1bd-05a1-445c-8fc5-0cf4877ddccf name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.135771492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206426135750370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a46e1bd-05a1-445c-8fc5-0cf4877ddccf name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.136378314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1f49c18-db71-4bd5-8187-ec60f6ff841a name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.136449187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1f49c18-db71-4bd5-8187-ec60f6ff841a name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.136668812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1f49c18-db71-4bd5-8187-ec60f6ff841a name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.175744515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f218de5-d88a-4dd0-8fb9-d3f7e4a33073 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.175924351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f218de5-d88a-4dd0-8fb9-d3f7e4a33073 name=/runtime.v1.RuntimeService/Version
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.177448724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86283646-f77c-4d62-b2e9-bda66b1f9200 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.178024402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206426177996200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86283646-f77c-4d62-b2e9-bda66b1f9200 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.178664531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9a48556-43ad-4946-be6f-0e1175e72a7f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.178719141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9a48556-43ad-4946-be6f-0e1175e72a7f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:00:26 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:00:26.178991550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9a48556-43ad-4946-be6f-0e1175e72a7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5555dedde072c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   b8b38f0d9fc59       storage-provisioner
	66a985fe79cee       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   c58ca9cb975c2       busybox
	3fc749b082254       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   612194bb3c8cc       coredns-7db6d8ff4d-x92p9
	03a8b6ec44d3a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   e75c8115e8290       kube-proxy-7ddtk
	254a32d2ee69b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   b8b38f0d9fc59       storage-provisioner
	82682648d9acf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   af86bdd9111b8       etcd-default-k8s-diff-port-668445
	c254befba5c81       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   e23f1ec5baab9       kube-apiserver-default-k8s-diff-port-668445
	78120ead890c0       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   d8a3ed4cefcf5       kube-controller-manager-default-k8s-diff-port-668445
	7d443d30c89f0       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   f88fbb0be85d4       kube-scheduler-default-k8s-diff-port-668445
	
	
	==> coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48005 - 25047 "HINFO IN 4496369672220611389.2769398303541995984. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011848464s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-668445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-668445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=default-k8s-diff-port-668445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_40_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:40:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-668445
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:00:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:57:39 +0000   Mon, 20 May 2024 11:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:57:39 +0000   Mon, 20 May 2024 11:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:57:39 +0000   Mon, 20 May 2024 11:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:57:39 +0000   Mon, 20 May 2024 11:47:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.221
	  Hostname:    default-k8s-diff-port-668445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1236b9c875b407ebb2860324b3b9a70
	  System UUID:                f1236b9c-875b-407e-bb28-60324b3b9a70
	  Boot ID:                    1caa0d46-e61a-473b-8b88-a9e0961ee5d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-x92p9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-default-k8s-diff-port-668445                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-668445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-668445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-7ddtk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-668445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-cdjmz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-668445 event: Registered Node default-k8s-diff-port-668445 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-668445 event: Registered Node default-k8s-diff-port-668445 in Controller
	
	
	==> dmesg <==
	[May20 11:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055883] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043727] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.682305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497684] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643593] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.586986] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.056080] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070230] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.196498] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.156427] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.305624] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.546332] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.063986] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.834246] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +5.585730] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.456168] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[May20 11:47] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.448553] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] <==
	{"level":"info","ts":"2024-05-20T11:46:53.198459Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.221:2380"}
	{"level":"info","ts":"2024-05-20T11:46:53.199051Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"42f5c41a9668f0c","initial-advertise-peer-urls":["https://192.168.61.221:2380"],"listen-peer-urls":["https://192.168.61.221:2380"],"advertise-client-urls":["https://192.168.61.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:46:53.202056Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:46:54.623397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42f5c41a9668f0c is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-20T11:46:54.623475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42f5c41a9668f0c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:46:54.623503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42f5c41a9668f0c received MsgPreVoteResp from 42f5c41a9668f0c at term 2"}
	{"level":"info","ts":"2024-05-20T11:46:54.623519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42f5c41a9668f0c became candidate at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:54.623527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42f5c41a9668f0c received MsgVoteResp from 42f5c41a9668f0c at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:54.623637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42f5c41a9668f0c became leader at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:54.623656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42f5c41a9668f0c elected leader 42f5c41a9668f0c at term 3"}
	{"level":"info","ts":"2024-05-20T11:46:54.636771Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"42f5c41a9668f0c","local-member-attributes":"{Name:default-k8s-diff-port-668445 ClientURLs:[https://192.168.61.221:2379]}","request-path":"/0/members/42f5c41a9668f0c/attributes","cluster-id":"7ec9508e87cf454c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:46:54.636875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:46:54.637057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:46:54.637109Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:46:54.637269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:46:54.639668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.221:2379"}
	{"level":"info","ts":"2024-05-20T11:46:54.639689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:47:08.030214Z","caller":"traceutil/trace.go:171","msg":"trace[1386647512] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:621; }","duration":"163.907753ms","start":"2024-05-20T11:47:07.866256Z","end":"2024-05-20T11:47:08.030164Z","steps":["trace[1386647512] 'read index received'  (duration: 163.660935ms)","trace[1386647512] 'applied index is now lower than readState.Index'  (duration: 246.108µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T11:47:08.030654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.238378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-668445\" ","response":"range_response_count:1 size:5536"}
	{"level":"info","ts":"2024-05-20T11:47:08.030767Z","caller":"traceutil/trace.go:171","msg":"trace[976766791] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-668445; range_end:; response_count:1; response_revision:580; }","duration":"164.533741ms","start":"2024-05-20T11:47:07.866218Z","end":"2024-05-20T11:47:08.030752Z","steps":["trace[976766791] 'agreement among raft nodes before linearized reading'  (duration: 164.185489ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:47:08.030821Z","caller":"traceutil/trace.go:171","msg":"trace[428608829] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"170.470419ms","start":"2024-05-20T11:47:07.860327Z","end":"2024-05-20T11:47:08.030798Z","steps":["trace[428608829] 'process raft request'  (duration: 169.723152ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:47:11.546033Z","caller":"traceutil/trace.go:171","msg":"trace[749986633] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"123.151428ms","start":"2024-05-20T11:47:11.422478Z","end":"2024-05-20T11:47:11.54563Z","steps":["trace[749986633] 'process raft request'  (duration: 122.79367ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:56:54.666999Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2024-05-20T11:56:54.678177Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":832,"took":"10.846272ms","hash":1978745169,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-05-20T11:56:54.678238Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1978745169,"revision":832,"compact-revision":-1}
	
	
	==> kernel <==
	 12:00:26 up 14 min,  0 users,  load average: 0.27, 0.11, 0.08
	Linux default-k8s-diff-port-668445 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] <==
	I0520 11:54:57.065178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:56:56.065259       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:56:56.065428       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0520 11:56:57.066012       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:56:57.066105       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:56:57.066137       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:56:57.066107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:56:57.066230       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:56:57.067282       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:57:57.067290       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:57:57.067601       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:57:57.067638       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:57:57.067573       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:57:57.067748       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:57:57.069523       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:59:57.068122       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:59:57.068415       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:59:57.068444       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:59:57.070458       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:59:57.070541       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:59:57.070549       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] <==
	I0520 11:54:39.727227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:55:09.272577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:55:09.739270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:55:39.277571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:55:39.747183       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:56:09.283169       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:56:09.756583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:56:39.290343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:56:39.765304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:57:09.295407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:57:09.773912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:57:39.300559       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:57:39.781380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 11:57:57.756402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="287.772µs"
	E0520 11:58:09.305773       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:58:09.789220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 11:58:10.752046       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="55.217µs"
	E0520 11:58:39.313654       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:58:39.797557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:59:09.318859       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:59:09.805248       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:59:39.324700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:59:39.813400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:00:09.329922       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:00:09.820316       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] <==
	I0520 11:46:57.458834       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:46:57.472782       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.221"]
	I0520 11:46:57.508884       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:46:57.508922       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:46:57.508985       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:46:57.511590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:46:57.511808       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:46:57.511839       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:57.513082       1 config.go:192] "Starting service config controller"
	I0520 11:46:57.513125       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:46:57.513148       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:46:57.513169       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:46:57.513669       1 config.go:319] "Starting node config controller"
	I0520 11:46:57.513696       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:46:57.613737       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:46:57.613780       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:46:57.613879       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] <==
	I0520 11:46:53.835180       1 serving.go:380] Generated self-signed cert in-memory
	W0520 11:46:56.019468       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:46:56.019565       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:46:56.019575       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:46:56.019580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:46:56.070776       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:46:56.070878       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:56.074681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:46:56.074778       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:46:56.075416       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 11:46:56.075530       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:46:56.175578       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 11:57:51 default-k8s-diff-port-668445 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:57:51 default-k8s-diff-port-668445 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:57:51 default-k8s-diff-port-668445 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:57:57 default-k8s-diff-port-668445 kubelet[944]: E0520 11:57:57.737315     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:58:10 default-k8s-diff-port-668445 kubelet[944]: E0520 11:58:10.736880     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:58:22 default-k8s-diff-port-668445 kubelet[944]: E0520 11:58:22.736778     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:58:35 default-k8s-diff-port-668445 kubelet[944]: E0520 11:58:35.737644     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:58:48 default-k8s-diff-port-668445 kubelet[944]: E0520 11:58:48.737254     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:58:51 default-k8s-diff-port-668445 kubelet[944]: E0520 11:58:51.767680     944 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:58:51 default-k8s-diff-port-668445 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:58:51 default-k8s-diff-port-668445 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:58:51 default-k8s-diff-port-668445 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:58:51 default-k8s-diff-port-668445 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:59:03 default-k8s-diff-port-668445 kubelet[944]: E0520 11:59:03.737775     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:59:15 default-k8s-diff-port-668445 kubelet[944]: E0520 11:59:15.741882     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:59:28 default-k8s-diff-port-668445 kubelet[944]: E0520 11:59:28.737574     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:59:39 default-k8s-diff-port-668445 kubelet[944]: E0520 11:59:39.737997     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:59:50 default-k8s-diff-port-668445 kubelet[944]: E0520 11:59:50.736574     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 11:59:51 default-k8s-diff-port-668445 kubelet[944]: E0520 11:59:51.770404     944 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:59:51 default-k8s-diff-port-668445 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:59:51 default-k8s-diff-port-668445 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:59:51 default-k8s-diff-port-668445 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:59:51 default-k8s-diff-port-668445 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:00:03 default-k8s-diff-port-668445 kubelet[944]: E0520 12:00:03.739901     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:00:17 default-k8s-diff-port-668445 kubelet[944]: E0520 12:00:17.737705     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	
	
	==> storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] <==
	I0520 11:46:57.332090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 11:47:27.336345       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] <==
	I0520 11:47:28.073533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:47:28.084926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:47:28.085372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:47:45.489018       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:47:45.489175       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-668445_687eb3ed-7683-441d-b908-0ab449304a7a!
	I0520 11:47:45.490212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8226dea9-06fa-4f0d-9e1d-8213276187a6", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-668445_687eb3ed-7683-441d-b908-0ab449304a7a became leader
	I0520 11:47:45.590132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-668445_687eb3ed-7683-441d-b908-0ab449304a7a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-cdjmz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 describe pod metrics-server-569cc877fc-cdjmz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-668445 describe pod metrics-server-569cc877fc-cdjmz: exit status 1 (63.790343ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-cdjmz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-668445 describe pod metrics-server-569cc877fc-cdjmz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.23s)
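The kubelet log above shows metrics-server stuck in ImagePullBackOff because the test deliberately re-points the addon's registry at the unreachable fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table further down), so the pull can never succeed. A minimal, illustrative way to confirm that state from the failing profile, assuming the addon's usual k8s-app=metrics-server label and the same kubectl context (this is not part of the test harness), would be:

	# Inspect the backing-off pod and its pull errors (illustrative; run against the live cluster)
	kubectl --context default-k8s-diff-port-668445 -n kube-system describe pod -l k8s-app=metrics-server
	# Recent namespace events, including the failed pulls from fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-668445 -n kube-system get events --sort-by=.lastTimestamp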

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-814599 -n no-preload-814599
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-20 12:01:33.70557334 +0000 UTC m=+6063.417456169
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
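For reference, the condition the harness polls for here can be approximated with plain kubectl (an illustrative sketch, assuming the same context name; the harness uses its own client-side poller rather than kubectl wait):

	# List whatever matches the selector the test waits on
	kubectl --context no-preload-814599 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Block for up to the same 9-minute budget waiting for the pod to become Ready
	kubectl --context no-preload-814599 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m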
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-814599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-814599 logs -n 25: (2.163915753s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                           | kubernetes-upgrade-854547    | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:44:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:44:01.940145   58840 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:44:01.940398   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940409   58840 out.go:304] Setting ErrFile to fd 2...
	I0520 11:44:01.940415   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940633   58840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false
	I0520 11:44:01.942056   58840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5185,"bootTime":1716200257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:44:01.942118   58840 start.go:139] virtualization: kvm guest
	I0520 11:44:01.944331   58840 out.go:177] * [default-k8s-diff-port-668445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:44:01.945948   58840 notify.go:220] Checking for updates...
	I0520 11:44:01.945956   58840 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:44:01.947257   58840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:44:01.948525   58840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:44:01.949881   58840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:44:01.951156   58840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:44:01.952736   58840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:44:01.954565   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:44:01.954932   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.954979   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.969623   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0520 11:44:01.969984   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.970444   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.970489   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.970816   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.971013   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:01.971212   58840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:44:01.971493   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.971540   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.985556   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0520 11:44:01.985885   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.986365   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.986388   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.986707   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.986883   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:02.021778   58840 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:44:02.022944   58840 start.go:297] selected driver: kvm2
	I0520 11:44:02.022954   58840 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.023042   58840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:44:02.023717   58840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.023796   58840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:44:02.038516   58840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:44:02.038900   58840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:44:02.038931   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:44:02.038942   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:44:02.038989   58840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.039097   58840 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.041645   58840 out.go:177] * Starting "default-k8s-diff-port-668445" primary control-plane node in "default-k8s-diff-port-668445" cluster
	I0520 11:44:02.042785   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:44:02.042826   58840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:44:02.042839   58840 cache.go:56] Caching tarball of preloaded images
	I0520 11:44:02.042917   58840 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:44:02.042929   58840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:44:02.043041   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:44:02.043267   58840 start.go:360] acquireMachinesLock for default-k8s-diff-port-668445: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:44:06.605177   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:09.677144   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:15.757188   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:18.829221   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:24.909165   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:27.981180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:34.061180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:37.133203   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:43.213225   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:46.285226   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:52.365216   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:55.437121   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:01.517184   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:04.589206   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:10.669220   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:13.741264   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:19.821209   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:22.893127   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:28.973155   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:32.045217   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:38.125195   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.127376   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:41.127411   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127706   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:45:41.127739   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127954   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:45:41.129638   57519 machine.go:97] duration metric: took 4m44.663213682s to provisionDockerMachine
	I0520 11:45:41.129681   57519 fix.go:56] duration metric: took 4m44.685919857s for fixHost
	I0520 11:45:41.129692   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 4m44.68594997s
	W0520 11:45:41.129717   57519 start.go:713] error starting host: provision: host is not running
	W0520 11:45:41.129807   57519 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0520 11:45:41.129817   57519 start.go:728] Will try again in 5 seconds ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:46.134344   57519 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:46:00.745820   58398 start.go:364] duration metric: took 3m22.67445444s to acquireMachinesLock for "embed-certs-274976"
	I0520 11:46:00.745877   58398 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:00.745887   58398 fix.go:54] fixHost starting: 
	I0520 11:46:00.746287   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:00.746320   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:00.762678   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0520 11:46:00.763132   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:00.763654   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:00.763681   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:00.763991   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:00.764154   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:00.764308   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:00.765826   58398 fix.go:112] recreateIfNeeded on embed-certs-274976: state=Stopped err=<nil>
	I0520 11:46:00.765861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	W0520 11:46:00.766017   58398 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:00.768362   58398 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274976" ...
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
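	(The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the v1.20-era pause image and the cgroupfs cgroup manager. As an illustration only, not part of this log, the result could be spot-checked on the node with:
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  pause_image = "registry.k8s.io/pause:3.2"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	)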
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
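	(The lines above probe net.bridge.bridge-nf-call-iptables, load br_netfilter when the sysctl is missing, and enable IPv4 forwarding. Outside this log, the same kernel prerequisites are conventionally made persistent with something like the following generic sketch, which is not minikube's own provisioning:
	  cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
	  br_netfilter
	  EOF
	  cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
	  net.bridge.bridge-nf-call-iptables = 1
	  net.ipv4.ip_forward = 1
	  EOF
	  sudo sysctl --system
	)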
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:00.769810   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Start
	I0520 11:46:00.769988   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring networks are active...
	I0520 11:46:00.770615   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network default is active
	I0520 11:46:00.770928   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network mk-embed-certs-274976 is active
	I0520 11:46:00.771273   58398 main.go:141] libmachine: (embed-certs-274976) Getting domain xml...
	I0520 11:46:00.771977   58398 main.go:141] libmachine: (embed-certs-274976) Creating domain...
	I0520 11:46:02.023816   58398 main.go:141] libmachine: (embed-certs-274976) Waiting to get IP...
	I0520 11:46:02.024616   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.025031   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.025111   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.025009   59335 retry.go:31] will retry after 218.274206ms: waiting for machine to come up
	I0520 11:46:02.244661   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.245135   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.245163   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.245110   59335 retry.go:31] will retry after 359.160276ms: waiting for machine to come up
	I0520 11:46:02.606589   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.607106   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.607129   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.607044   59335 retry.go:31] will retry after 462.479953ms: waiting for machine to come up
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:03.071686   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.072242   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.072271   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.072198   59335 retry.go:31] will retry after 531.663033ms: waiting for machine to come up
	I0520 11:46:03.606121   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.606629   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.606653   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.606581   59335 retry.go:31] will retry after 690.866851ms: waiting for machine to come up
	I0520 11:46:04.299326   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:04.299799   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:04.299829   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:04.299750   59335 retry.go:31] will retry after 928.456543ms: waiting for machine to come up
	I0520 11:46:05.230115   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.230568   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.230586   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.230539   59335 retry.go:31] will retry after 761.597702ms: waiting for machine to come up
	I0520 11:46:05.993511   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.993897   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.993930   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.993854   59335 retry.go:31] will retry after 1.405329972s: waiting for machine to come up
	I0520 11:46:07.401706   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:07.402246   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:07.402312   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:07.402223   59335 retry.go:31] will retry after 1.717177668s: waiting for machine to come up
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
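	(The kubelet [Unit]/[Service] drop-in printed above is written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and then picked up via daemon-reload. Purely as a verification sketch, not taken from this log, the effective unit and flags could be inspected on the node with:
	  sudo systemctl daemon-reload
	  systemctl cat kubelet                 # prints kubelet.service plus the 10-kubeadm.conf drop-in
	  systemctl show kubelet -p ExecStart   # shows the kubelet flags actually in effect
	)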
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
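	(The kubeadm manifest printed above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below, and the kubeadm binary lives under the /var/lib/minikube/binaries/v1.20.0 directory checked just above. As an illustration only, since the init invocation itself is outside this excerpt, a config of this shape is what kubeadm consumes via:
	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	)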
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:09.121909   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:09.122323   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:09.122358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:09.122261   59335 retry.go:31] will retry after 1.49678652s: waiting for machine to come up
	I0520 11:46:10.620875   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:10.621226   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:10.621255   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:10.621184   59335 retry.go:31] will retry after 2.772139112s: waiting for machine to come up
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.396309   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:13.396705   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:13.396725   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:13.396678   59335 retry.go:31] will retry after 2.987909039s: waiting for machine to come up
	I0520 11:46:16.386007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:16.386500   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:16.386527   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:16.386460   59335 retry.go:31] will retry after 4.351760111s: waiting for machine to come up
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.138009   58840 start.go:364] duration metric: took 2m20.094710429s to acquireMachinesLock for "default-k8s-diff-port-668445"
	I0520 11:46:22.138085   58840 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:22.138096   58840 fix.go:54] fixHost starting: 
	I0520 11:46:22.138535   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:22.138577   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:22.154345   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0520 11:46:22.154701   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:22.155172   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:22.155198   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:22.155492   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:22.155667   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:22.155834   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:22.157486   58840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-668445: state=Stopped err=<nil>
	I0520 11:46:22.157525   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	W0520 11:46:22.157703   58840 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:22.159724   58840 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-668445" ...
	I0520 11:46:20.742954   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743480   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has current primary IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743503   58398 main.go:141] libmachine: (embed-certs-274976) Found IP for machine: 192.168.50.167
	I0520 11:46:20.743517   58398 main.go:141] libmachine: (embed-certs-274976) Reserving static IP address...
	I0520 11:46:20.743907   58398 main.go:141] libmachine: (embed-certs-274976) Reserved static IP address: 192.168.50.167
	I0520 11:46:20.743930   58398 main.go:141] libmachine: (embed-certs-274976) Waiting for SSH to be available...
	I0520 11:46:20.743955   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.744000   58398 main.go:141] libmachine: (embed-certs-274976) DBG | skip adding static IP to network mk-embed-certs-274976 - found existing host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"}
	I0520 11:46:20.744017   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Getting to WaitForSSH function...
	I0520 11:46:20.746685   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747071   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.747112   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747244   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH client type: external
	I0520 11:46:20.747259   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa (-rw-------)
	I0520 11:46:20.747278   58398 main.go:141] libmachine: (embed-certs-274976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:20.747287   58398 main.go:141] libmachine: (embed-certs-274976) DBG | About to run SSH command:
	I0520 11:46:20.747295   58398 main.go:141] libmachine: (embed-certs-274976) DBG | exit 0
	I0520 11:46:20.872997   58398 main.go:141] libmachine: (embed-certs-274976) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:20.873377   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetConfigRaw
	I0520 11:46:20.874039   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:20.876476   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.876827   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.876859   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.877169   58398 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:46:20.877407   58398 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:20.877432   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:20.877643   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.879744   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880089   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.880113   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880328   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.880529   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880673   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880840   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.881034   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.881263   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.881275   58398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:20.993535   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:20.993572   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.993813   58398 buildroot.go:166] provisioning hostname "embed-certs-274976"
	I0520 11:46:20.993841   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.994034   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.996567   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.996922   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.996970   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.997092   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.997280   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997437   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997580   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.997734   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.997993   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.998013   58398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274976 && echo "embed-certs-274976" | sudo tee /etc/hostname
	I0520 11:46:21.125482   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274976
	
	I0520 11:46:21.125512   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.128347   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128717   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.128746   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128916   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.129120   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129296   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129436   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.129613   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.129774   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.129789   58398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274976/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:21.250287   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:21.250320   58398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:21.250371   58398 buildroot.go:174] setting up certificates
	I0520 11:46:21.250385   58398 provision.go:84] configureAuth start
	I0520 11:46:21.250403   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:21.250726   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:21.253610   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254038   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.254053   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254303   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.256633   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.257046   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257206   58398 provision.go:143] copyHostCerts
	I0520 11:46:21.257266   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:21.257289   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:21.257369   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:21.257489   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:21.257499   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:21.257537   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:21.257617   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:21.257626   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:21.257661   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:21.257729   58398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274976 san=[127.0.0.1 192.168.50.167 embed-certs-274976 localhost minikube]
	I0520 11:46:21.458096   58398 provision.go:177] copyRemoteCerts
	I0520 11:46:21.458152   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:21.458178   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.460662   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461006   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.461042   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461205   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.461370   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.461545   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.461705   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.547703   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:21.574345   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:46:21.598843   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:21.624064   58398 provision.go:87] duration metric: took 373.664517ms to configureAuth
	I0520 11:46:21.624091   58398 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:21.624262   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:21.624330   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.626996   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627378   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.627406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627556   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.627711   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.627902   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.628077   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.628231   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.628402   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.628420   58398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:21.894896   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:21.894923   58398 machine.go:97] duration metric: took 1.017499149s to provisionDockerMachine
	I0520 11:46:21.894936   58398 start.go:293] postStartSetup for "embed-certs-274976" (driver="kvm2")
	I0520 11:46:21.894948   58398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:21.894966   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:21.895279   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:21.895304   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.897866   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898160   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.898193   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898292   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.898484   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.898663   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.898779   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.984571   58398 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:21.988891   58398 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:21.988918   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:21.988995   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:21.989129   58398 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:21.989257   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:21.999219   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:22.023032   58398 start.go:296] duration metric: took 128.082095ms for postStartSetup
	I0520 11:46:22.023076   58398 fix.go:56] duration metric: took 21.277188857s for fixHost
	I0520 11:46:22.023094   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.025971   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026362   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.026393   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026536   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.026725   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.026867   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.027017   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.027246   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:22.027424   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:22.027437   58398 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:46:22.137813   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205582.113779638
	
	I0520 11:46:22.137840   58398 fix.go:216] guest clock: 1716205582.113779638
	I0520 11:46:22.137850   58398 fix.go:229] Guest: 2024-05-20 11:46:22.113779638 +0000 UTC Remote: 2024-05-20 11:46:22.023079043 +0000 UTC m=+224.083510648 (delta=90.700595ms)
	I0520 11:46:22.137898   58398 fix.go:200] guest clock delta is within tolerance: 90.700595ms
	I0520 11:46:22.137906   58398 start.go:83] releasing machines lock for "embed-certs-274976", held for 21.39205527s
	I0520 11:46:22.137943   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.138222   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:22.140951   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.141388   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141558   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142024   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142187   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142245   58398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:22.142291   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.142395   58398 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:22.142417   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.144896   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145116   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145386   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145419   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145490   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145518   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145525   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145647   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145703   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145827   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145849   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146019   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146022   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:22.146175   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	W0520 11:46:22.226345   58398 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:22.226421   58398 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:22.248678   58398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:22.404298   58398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:22.410174   58398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:22.410249   58398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:22.426286   58398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:22.426311   58398 start.go:494] detecting cgroup driver to use...
	I0520 11:46:22.426369   58398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:22.441746   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:22.455445   58398 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:22.455515   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:22.469441   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:22.485963   58398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:22.604443   58398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:22.758542   58398 docker.go:233] disabling docker service ...
	I0520 11:46:22.758628   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:22.773371   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:22.786871   58398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:22.940022   58398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:23.076857   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:23.091980   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:23.111380   58398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:23.111434   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.122532   58398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:23.122597   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.134355   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.145484   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.156776   58398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:23.169017   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.180681   58398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.205664   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.217267   58398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:23.228490   58398 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:23.228555   58398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:23.243578   58398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:23.255394   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:23.387407   58398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:23.537543   58398 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:23.537613   58398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:23.542886   58398 start.go:562] Will wait 60s for crictl version
	I0520 11:46:23.542934   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:46:23.546740   58398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:23.587662   58398 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:23.587746   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.615735   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.648727   58398 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.161646   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Start
	I0520 11:46:22.161813   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring networks are active...
	I0520 11:46:22.162609   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network default is active
	I0520 11:46:22.163020   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network mk-default-k8s-diff-port-668445 is active
	I0520 11:46:22.163509   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Getting domain xml...
	I0520 11:46:22.164146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Creating domain...
	I0520 11:46:23.456104   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting to get IP...
	I0520 11:46:23.457157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457703   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457735   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.457652   59511 retry.go:31] will retry after 262.047261ms: waiting for machine to come up
	I0520 11:46:23.721329   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721756   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721815   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.721742   59511 retry.go:31] will retry after 301.868596ms: waiting for machine to come up
	I0520 11:46:24.025477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026042   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026072   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.026006   59511 retry.go:31] will retry after 431.470276ms: waiting for machine to come up
	I0520 11:46:24.458962   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459564   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.459450   59511 retry.go:31] will retry after 566.093902ms: waiting for machine to come up
	I0520 11:46:25.027215   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027787   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.027729   59511 retry.go:31] will retry after 471.2178ms: waiting for machine to come up
	I0520 11:46:25.500575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501067   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501106   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.501010   59511 retry.go:31] will retry after 682.542082ms: waiting for machine to come up
	I0520 11:46:26.184886   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185367   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185400   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:26.185315   59511 retry.go:31] will retry after 973.33325ms: waiting for machine to come up
	I0520 11:46:23.650320   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:23.653452   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.653919   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:23.653962   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.654259   58398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:23.658592   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:23.671310   58398 kubeadm.go:877] updating cluster {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:23.671460   58398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:23.671523   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:23.715873   58398 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:23.715947   58398 ssh_runner.go:195] Run: which lz4
	I0520 11:46:23.720362   58398 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:23.724821   58398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:23.724852   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:25.254824   58398 crio.go:462] duration metric: took 1.534501958s to copy over tarball
	I0520 11:46:25.254937   58398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:27.527739   58398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272764169s)
	I0520 11:46:27.527773   58398 crio.go:469] duration metric: took 2.272914971s to extract the tarball
	I0520 11:46:27.527780   58398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:27.564738   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:27.613196   58398 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:27.613220   58398 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:27.613229   58398 kubeadm.go:928] updating node { 192.168.50.167 8443 v1.30.1 crio true true} ...
	I0520 11:46:27.613321   58398 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:27.613381   58398 ssh_runner.go:195] Run: crio config
	I0520 11:46:27.669907   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:27.669931   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:27.669948   58398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:27.669969   58398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274976 NodeName:embed-certs-274976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:27.670117   58398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274976"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:27.670180   58398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:27.680414   58398 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:27.680473   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:27.690431   58398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0520 11:46:27.707625   58398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:27.724120   58398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0520 11:46:27.744218   58398 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:27.748173   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:27.760752   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:27.891061   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:27.909326   58398 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976 for IP: 192.168.50.167
	I0520 11:46:27.909354   58398 certs.go:194] generating shared ca certs ...
	I0520 11:46:27.909387   58398 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:27.909574   58398 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:27.909637   58398 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:27.909650   58398 certs.go:256] generating profile certs ...
	I0520 11:46:27.909754   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/client.key
	I0520 11:46:27.909843   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key.d6be4a1f
	I0520 11:46:27.909922   58398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key
	I0520 11:46:27.910072   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:27.910127   58398 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:27.910146   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:27.910183   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:27.910208   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:27.910229   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:27.910278   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:27.911030   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:27.949250   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.160027   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160492   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160515   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:27.160435   59511 retry.go:31] will retry after 1.263563226s: waiting for machine to come up
	I0520 11:46:28.426016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426503   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426555   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:28.426463   59511 retry.go:31] will retry after 1.407211588s: waiting for machine to come up
	I0520 11:46:29.835680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836101   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836121   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:29.836058   59511 retry.go:31] will retry after 2.320334078s: waiting for machine to come up
	I0520 11:46:27.987645   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:28.036652   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:28.083998   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:46:28.116104   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:28.154695   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:28.181013   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:28.213382   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:28.243550   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:28.268537   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:28.292856   58398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:28.310801   58398 ssh_runner.go:195] Run: openssl version
	I0520 11:46:28.316724   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:28.327461   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332286   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332345   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.339005   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:28.351192   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:28.362639   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367566   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367623   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.373275   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:28.384549   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:28.395600   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400136   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400188   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.405914   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:28.417533   58398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:28.422327   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:28.428746   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:28.434840   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:28.441227   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:28.447310   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:28.453450   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:28.462637   58398 kubeadm.go:391] StartCluster: {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:28.462746   58398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:28.462828   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.501759   58398 cri.go:89] found id: ""
	I0520 11:46:28.501826   58398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:28.512536   58398 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:28.512554   58398 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:28.512559   58398 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:28.512603   58398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:28.522288   58398 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:28.523266   58398 kubeconfig.go:125] found "embed-certs-274976" server: "https://192.168.50.167:8443"
	I0520 11:46:28.525188   58398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:28.535056   58398 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.167
	I0520 11:46:28.535092   58398 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:28.535104   58398 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:28.535157   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.573317   58398 cri.go:89] found id: ""
	I0520 11:46:28.573429   58398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:28.593111   58398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:28.603312   58398 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:28.603345   58398 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:28.603398   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:28.612794   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:28.612870   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:28.622819   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:28.634550   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:28.634614   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:28.644568   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.654918   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:28.654980   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.665581   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:28.677090   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:28.677167   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:28.687152   58398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:28.697505   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:28.833183   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:29.963435   58398 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130211655s)
	I0520 11:46:29.963486   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.185882   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.248946   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.334924   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:30.335020   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.835637   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.336028   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.354314   58398 api_server.go:72] duration metric: took 1.01937754s to wait for apiserver process to appear ...
	I0520 11:46:31.354343   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:31.354365   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:31.354916   58398 api_server.go:269] stopped: https://192.168.50.167:8443/healthz: Get "https://192.168.50.167:8443/healthz": dial tcp 192.168.50.167:8443: connect: connection refused
	I0520 11:46:31.854629   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.299609   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.299653   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.299669   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.370279   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.370304   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.370317   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.378426   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.378448   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.855034   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.860164   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:34.860205   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.354583   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.362461   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:35.362489   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.854836   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.859322   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:46:35.867050   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:35.867077   58398 api_server.go:131] duration metric: took 4.512726716s to wait for apiserver health ...
	I0520 11:46:35.867088   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:35.867096   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:35.869427   58398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.158195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158712   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:32.158637   59511 retry.go:31] will retry after 2.733920459s: waiting for machine to come up
	I0520 11:46:34.893892   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894436   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:34.894374   59511 retry.go:31] will retry after 3.119057685s: waiting for machine to come up
	I0520 11:46:35.871058   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:35.898211   58398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:35.942295   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:35.954120   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:35.954156   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:35.954171   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:35.954181   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:35.954194   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:35.954204   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 11:46:35.954213   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:35.954220   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:35.954229   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:46:35.954239   58398 system_pods.go:74] duration metric: took 11.924168ms to wait for pod list to return data ...
	I0520 11:46:35.954249   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:35.957627   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:35.957664   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:35.957679   58398 node_conditions.go:105] duration metric: took 3.421729ms to run NodePressure ...
	I0520 11:46:35.957700   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:36.235314   58398 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241075   58398 kubeadm.go:733] kubelet initialised
	I0520 11:46:36.241101   58398 kubeadm.go:734] duration metric: took 5.759765ms waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241110   58398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:36.248030   58398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.254230   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254263   58398 pod_ready.go:81] duration metric: took 6.20401ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.254274   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254281   58398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.259348   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259379   58398 pod_ready.go:81] duration metric: took 5.088735ms for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.259392   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259402   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.266963   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.266997   58398 pod_ready.go:81] duration metric: took 7.580054ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.267010   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.267020   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.346321   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346353   58398 pod_ready.go:81] duration metric: took 79.314801ms for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.346363   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346371   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.746645   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746674   58398 pod_ready.go:81] duration metric: took 400.291751ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.746686   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746692   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.146413   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146443   58398 pod_ready.go:81] duration metric: took 399.744378ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.146454   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146465   58398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.545943   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545966   58398 pod_ready.go:81] duration metric: took 399.491619ms for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.545974   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545981   58398 pod_ready.go:38] duration metric: took 1.304861628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:37.545997   58398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:37.558076   58398 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:37.558097   58398 kubeadm.go:591] duration metric: took 9.045531679s to restartPrimaryControlPlane
	I0520 11:46:37.558106   58398 kubeadm.go:393] duration metric: took 9.095477048s to StartCluster
	I0520 11:46:37.558125   58398 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.558199   58398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:37.559701   58398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.559920   58398 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:37.561751   58398 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:37.560038   58398 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:37.560143   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:37.563096   58398 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274976"
	I0520 11:46:37.563106   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:37.563124   58398 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274976"
	W0520 11:46:37.563133   58398 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:37.563156   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563164   58398 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274976"
	I0520 11:46:37.563179   58398 addons.go:69] Setting metrics-server=true in profile "embed-certs-274976"
	I0520 11:46:37.563193   58398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274976"
	I0520 11:46:37.563204   58398 addons.go:234] Setting addon metrics-server=true in "embed-certs-274976"
	W0520 11:46:37.563215   58398 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:37.563241   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563516   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563558   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563575   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563600   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563614   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563638   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.577924   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0520 11:46:37.578096   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0520 11:46:37.578120   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0520 11:46:37.578322   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578480   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578527   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578851   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.578879   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.578982   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579003   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579019   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579038   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579202   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579342   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579364   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579385   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.579824   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579867   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.579871   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579895   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.582214   58398 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274976"
	W0520 11:46:37.582230   58398 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:37.582249   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.582511   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.582542   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.594308   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0520 11:46:37.594672   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.595047   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.595067   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.595525   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.595551   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0520 11:46:37.595709   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.596037   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.596545   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.596568   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.596980   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.597579   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.597599   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.597802   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.599963   58398 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:37.598457   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0520 11:46:37.601485   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:37.601505   58398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:37.601527   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.601793   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.602219   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.602238   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.602502   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.602688   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.604122   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.604188   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.605753   58398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:37.604566   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.604874   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.607190   58398 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.607205   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:37.607221   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.607240   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.607263   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.607414   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.607729   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.610088   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610462   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.610483   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610738   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.610912   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.611072   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.611212   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.613655   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0520 11:46:37.613938   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.614386   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.614403   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.614653   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.614826   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.616151   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.616331   58398 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.616345   58398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:37.616362   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.619627   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620050   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.620072   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620214   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.620365   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.620480   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.620660   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.734680   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:37.751098   58398 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:37.842846   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:37.842867   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:37.853894   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.862145   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:37.862167   58398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:37.883656   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.884046   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:37.884061   58398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:37.941086   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:38.187392   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187415   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187819   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.187841   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.187855   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187822   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.188095   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.188115   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.193695   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.193710   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.193926   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.193942   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813454   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813472   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.813794   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.813813   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813821   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813843   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.813894   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.814140   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.814154   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897046   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897078   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897453   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897469   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897494   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897505   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897755   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897759   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897772   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897784   58398 addons.go:470] Verifying addon metrics-server=true in "embed-certs-274976"
	I0520 11:46:38.899910   58398 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.014993   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015405   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015458   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:38.015373   59511 retry.go:31] will retry after 4.505016985s: waiting for machine to come up
	I0520 11:46:38.901272   58398 addons.go:505] duration metric: took 1.341257876s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0520 11:46:39.754549   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:41.758717   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:43.865868   57519 start.go:364] duration metric: took 57.7314587s to acquireMachinesLock for "no-preload-814599"
	I0520 11:46:43.865920   57519 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:43.865931   57519 fix.go:54] fixHost starting: 
	I0520 11:46:43.866311   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:43.866347   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:43.885634   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0520 11:46:43.886068   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:43.886620   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:46:43.886659   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:43.887013   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:43.887200   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:46:43.887337   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:46:43.888902   57519 fix.go:112] recreateIfNeeded on no-preload-814599: state=Stopped err=<nil>
	I0520 11:46:43.888958   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	W0520 11:46:43.889108   57519 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:43.891052   57519 out.go:177] * Restarting existing kvm2 VM for "no-preload-814599" ...
	I0520 11:46:42.521630   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522114   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Found IP for machine: 192.168.61.221
	I0520 11:46:42.522140   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserving static IP address...
	I0520 11:46:42.522157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has current primary IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522534   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserved static IP address: 192.168.61.221
	I0520 11:46:42.522556   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for SSH to be available...
	I0520 11:46:42.522571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.522593   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | skip adding static IP to network mk-default-k8s-diff-port-668445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"}
	I0520 11:46:42.522613   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Getting to WaitForSSH function...
	I0520 11:46:42.524812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525199   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.525221   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525362   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH client type: external
	I0520 11:46:42.525391   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa (-rw-------)
	I0520 11:46:42.525427   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:42.525443   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | About to run SSH command:
	I0520 11:46:42.525455   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | exit 0
	I0520 11:46:42.648689   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:42.649122   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetConfigRaw
	I0520 11:46:42.649745   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:42.652220   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.652596   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652821   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:46:42.653026   58840 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:42.653046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:42.653235   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.655453   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655758   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.655777   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655923   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.656089   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656384   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.656551   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.656729   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.656739   58840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:42.761195   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:42.761225   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761450   58840 buildroot.go:166] provisioning hostname "default-k8s-diff-port-668445"
	I0520 11:46:42.761477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761666   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.764340   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764670   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.764695   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764801   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.764973   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765131   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765285   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.765472   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.765634   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.765647   58840 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-668445 && echo "default-k8s-diff-port-668445" | sudo tee /etc/hostname
	I0520 11:46:42.883179   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-668445
	
	I0520 11:46:42.883204   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.885913   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886302   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.886347   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.886699   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.886884   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.887040   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.887238   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.887419   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.887444   58840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-668445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-668445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-668445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:43.001769   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:43.001792   58840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:43.001825   58840 buildroot.go:174] setting up certificates
	I0520 11:46:43.001834   58840 provision.go:84] configureAuth start
	I0520 11:46:43.001842   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:43.002064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.004606   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.004897   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.004944   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.005087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.006929   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.007296   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007386   58840 provision.go:143] copyHostCerts
	I0520 11:46:43.007461   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:43.007470   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:43.007523   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:43.007608   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:43.007617   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:43.007637   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:43.007686   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:43.007693   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:43.007709   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:43.007753   58840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-668445 san=[127.0.0.1 192.168.61.221 default-k8s-diff-port-668445 localhost minikube]
	I0520 11:46:43.165780   58840 provision.go:177] copyRemoteCerts
	I0520 11:46:43.165835   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:43.165860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.168782   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169136   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.169181   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169343   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.169575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.169717   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.169892   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.252175   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:43.280313   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0520 11:46:43.306901   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:43.331191   58840 provision.go:87] duration metric: took 329.344445ms to configureAuth
	I0520 11:46:43.331227   58840 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:43.331478   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:43.331576   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.334173   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334573   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.334603   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334770   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.334961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335108   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335236   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.335381   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.335593   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.335615   58840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:43.630180   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:43.630213   58840 machine.go:97] duration metric: took 977.172725ms to provisionDockerMachine
	I0520 11:46:43.630229   58840 start.go:293] postStartSetup for "default-k8s-diff-port-668445" (driver="kvm2")
	I0520 11:46:43.630244   58840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:43.630270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.630597   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:43.630635   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.633307   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633650   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.633680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633798   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.633976   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.634141   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.634264   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.719350   58840 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:43.723320   58840 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:43.723339   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:43.723396   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:43.723478   58840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:43.723564   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:43.732422   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:43.757138   58840 start.go:296] duration metric: took 126.894206ms for postStartSetup
	I0520 11:46:43.757179   58840 fix.go:56] duration metric: took 21.619082215s for fixHost
	I0520 11:46:43.757202   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.759486   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.759860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.759887   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.760065   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.760248   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760425   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760527   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.760654   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.760824   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.760839   58840 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:43.865691   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205603.841593335
	
	I0520 11:46:43.865713   58840 fix.go:216] guest clock: 1716205603.841593335
	I0520 11:46:43.865722   58840 fix.go:229] Guest: 2024-05-20 11:46:43.841593335 +0000 UTC Remote: 2024-05-20 11:46:43.757183475 +0000 UTC m=+161.848941663 (delta=84.40986ms)
	I0520 11:46:43.865777   58840 fix.go:200] guest clock delta is within tolerance: 84.40986ms
	I0520 11:46:43.865790   58840 start.go:83] releasing machines lock for "default-k8s-diff-port-668445", held for 21.727726024s
	I0520 11:46:43.865824   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.866087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.868681   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869054   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.869076   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869256   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869792   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.870035   58840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:43.870091   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.870174   58840 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:43.870195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.872817   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873059   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873496   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873537   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873684   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.873687   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873861   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.873866   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.874033   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.874046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.874234   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	W0520 11:46:43.973840   58840 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:43.973913   58840 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:43.980706   58840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:44.132625   58840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:44.140558   58840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:44.140629   58840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:44.158922   58840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:44.158946   58840 start.go:494] detecting cgroup driver to use...
	I0520 11:46:44.159006   58840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:44.179206   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:44.194183   58840 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:44.194254   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:44.208701   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:44.223086   58840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:44.342811   58840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:44.491358   58840 docker.go:233] disabling docker service ...
	I0520 11:46:44.491428   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:44.509165   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:44.526088   58840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:44.700452   58840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:44.833129   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:44.849090   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:44.870148   58840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:44.870222   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.880951   58840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:44.881016   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.895038   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.908245   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.918844   58840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:44.929838   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.940446   58840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.960350   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
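Taken together, the sed edits above set the following keys in /etc/crio/crio.conf.d/02-crio.conf (a sketch reconstructed from the commands, not a dump of the file; the surrounding TOML tables come from the stock drop-in):
  pause_image = "registry.k8s.io/pause:3.9"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]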
	I0520 11:46:44.974587   58840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:44.985689   58840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:44.985748   58840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:44.999929   58840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
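The sysctl failure a few lines up is the usual symptom of br_netfilter not being loaded yet; the equivalent manual sequence on the same guest would look roughly like:
  sudo modprobe br_netfilter                        # creates /proc/sys/net/bridge/*
  sudo sysctl net.bridge.bridge-nf-call-iptables    # readable once the module is loaded
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"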
	I0520 11:46:45.010732   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:45.140038   58840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:45.291656   58840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:45.291728   58840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:45.297351   58840 start.go:562] Will wait 60s for crictl version
	I0520 11:46:45.297412   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:46:45.301297   58840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:45.343635   58840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:45.343703   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.373914   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.404596   58840 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:43.892523   57519 main.go:141] libmachine: (no-preload-814599) Calling .Start
	I0520 11:46:43.892686   57519 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:46:43.893370   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:46:43.893729   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:46:43.894105   57519 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:46:43.894847   57519 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:46:45.229992   57519 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:46:45.230981   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.231451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.231518   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.231438   59753 retry.go:31] will retry after 187.539159ms: waiting for machine to come up
	I0520 11:46:45.420854   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.421369   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.421393   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.421328   59753 retry.go:31] will retry after 327.709711ms: waiting for machine to come up
	I0520 11:46:45.750799   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.751384   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.751414   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.751334   59753 retry.go:31] will retry after 445.096849ms: waiting for machine to come up
	I0520 11:46:46.197899   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.198567   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.198598   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.198536   59753 retry.go:31] will retry after 561.029629ms: waiting for machine to come up
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.405909   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:45.408865   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409399   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:45.409429   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409655   58840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:45.413815   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:45.426264   58840 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:45.426366   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:45.426405   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:45.468938   58840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:45.469021   58840 ssh_runner.go:195] Run: which lz4
	I0520 11:46:45.474957   58840 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:45.481006   58840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:45.481039   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:44.255728   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:44.756897   58398 node_ready.go:49] node "embed-certs-274976" has status "Ready":"True"
	I0520 11:46:44.756963   58398 node_ready.go:38] duration metric: took 7.005828991s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:44.756978   58398 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:44.767584   58398 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774049   58398 pod_ready.go:92] pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:44.774078   58398 pod_ready.go:81] duration metric: took 6.468023ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774099   58398 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795658   58398 pod_ready.go:92] pod "etcd-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.795691   58398 pod_ready.go:81] duration metric: took 2.021582833s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795705   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810711   58398 pod_ready.go:92] pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.810739   58398 pod_ready.go:81] duration metric: took 15.025215ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810751   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.760855   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.761534   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.761579   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.761484   59753 retry.go:31] will retry after 497.075791ms: waiting for machine to come up
	I0520 11:46:47.260059   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:47.260542   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:47.260580   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:47.260479   59753 retry.go:31] will retry after 910.980097ms: waiting for machine to come up
	I0520 11:46:48.173677   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:48.174161   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:48.174189   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:48.174108   59753 retry.go:31] will retry after 931.969789ms: waiting for machine to come up
	I0520 11:46:49.107672   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:49.108241   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:49.108271   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:49.108194   59753 retry.go:31] will retry after 1.075042352s: waiting for machine to come up
	I0520 11:46:50.184518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:50.185078   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:50.185108   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:50.185025   59753 retry.go:31] will retry after 1.174646184s: waiting for machine to come up
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.010637   58840 crio.go:462] duration metric: took 1.535717978s to copy over tarball
	I0520 11:46:47.010719   58840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:49.308712   58840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297962166s)
	I0520 11:46:49.308747   58840 crio.go:469] duration metric: took 2.298080001s to extract the tarball
	I0520 11:46:49.308756   58840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:49.346851   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:49.393090   58840 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:49.393118   58840 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:49.393126   58840 kubeadm.go:928] updating node { 192.168.61.221 8444 v1.30.1 crio true true} ...
	I0520 11:46:49.393247   58840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-668445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:49.393326   58840 ssh_runner.go:195] Run: crio config
	I0520 11:46:49.438553   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:49.438575   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:49.438592   58840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:49.438618   58840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.221 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-668445 NodeName:default-k8s-diff-port-668445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:49.438792   58840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-668445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:49.438876   58840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:49.449415   58840 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:49.449487   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:49.460951   58840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0520 11:46:49.479654   58840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:49.497494   58840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0520 11:46:49.518474   58840 ssh_runner.go:195] Run: grep 192.168.61.221	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:49.522598   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
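This one-liner, together with the host.minikube.internal update earlier, keeps two minikube-managed entries in the guest's /etc/hosts; with the addresses from this run they read:
  192.168.61.1	host.minikube.internal
  192.168.61.221	control-plane.minikube.internal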
	I0520 11:46:49.537237   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:49.686022   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:49.712850   58840 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445 for IP: 192.168.61.221
	I0520 11:46:49.712871   58840 certs.go:194] generating shared ca certs ...
	I0520 11:46:49.712885   58840 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:49.713065   58840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:49.713110   58840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:49.713123   58840 certs.go:256] generating profile certs ...
	I0520 11:46:49.713261   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.key
	I0520 11:46:49.713346   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key.50005da2
	I0520 11:46:49.713390   58840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key
	I0520 11:46:49.713548   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:49.713583   58840 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:49.713597   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:49.713629   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:49.713656   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:49.713686   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:49.713744   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:49.714370   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:49.751367   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:49.787162   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:49.822660   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:49.863401   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 11:46:49.895295   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:49.928089   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:49.952687   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:49.982562   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:50.007234   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:50.031099   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:50.055146   58840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:50.073066   58840 ssh_runner.go:195] Run: openssl version
	I0520 11:46:50.080951   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:50.092134   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097436   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097496   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.103714   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:50.114321   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:50.125104   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131225   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131276   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.139113   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:50.153959   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:50.165441   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170302   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170349   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.176388   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
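The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention; a minimal sketch of how one of them is derived, using the minikubeCA cert from this run:
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 in this run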
	I0520 11:46:50.188372   58840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:50.193168   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:50.199298   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:50.208100   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:50.214704   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:50.220565   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:50.226463   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:50.232400   58840 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:50.232472   58840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:50.232532   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.271977   58840 cri.go:89] found id: ""
	I0520 11:46:50.272070   58840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:50.283676   58840 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:50.283699   58840 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:50.283706   58840 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:50.283753   58840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:50.296808   58840 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:50.298406   58840 kubeconfig.go:125] found "default-k8s-diff-port-668445" server: "https://192.168.61.221:8444"
	I0520 11:46:50.300563   58840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:50.310940   58840 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.221
	I0520 11:46:50.310964   58840 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:50.310975   58840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:50.311010   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.361321   58840 cri.go:89] found id: ""
	I0520 11:46:50.361400   58840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:50.382699   58840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:50.397635   58840 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:50.397657   58840 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:50.397707   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0520 11:46:50.407303   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:50.407364   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:50.418331   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0520 11:46:50.427889   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:50.427938   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:50.437649   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.446810   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:50.446861   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.456428   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0520 11:46:50.465444   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:50.465512   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
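The four grep-then-rm pairs above all follow one pattern; a compact shell equivalent (a sketch of the logic, not what minikube literally runs) is:
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done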
	I0520 11:46:50.478585   58840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:50.489053   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:50.607045   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.346500   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.594241   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.661179   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.751253   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:51.751331   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.318166   58398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.318193   58398 pod_ready.go:81] duration metric: took 1.507434439s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.318207   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323814   58398 pod_ready.go:92] pod "kube-proxy-6n9d5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.323850   58398 pod_ready.go:81] duration metric: took 5.632938ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323865   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356208   58398 pod_ready.go:92] pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.356235   58398 pod_ready.go:81] duration metric: took 32.360735ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356247   58398 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:50.363910   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:52.364530   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:51.360905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:51.482451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:51.482493   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:51.361293   59753 retry.go:31] will retry after 1.455979778s: waiting for machine to come up
	I0520 11:46:52.818580   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:52.819134   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:52.819162   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:52.819073   59753 retry.go:31] will retry after 2.771639261s: waiting for machine to come up
	I0520 11:46:55.593581   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:55.594138   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:55.594167   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:55.594089   59753 retry.go:31] will retry after 2.881341208s: waiting for machine to come up
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.252101   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.752498   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.794795   58840 api_server.go:72] duration metric: took 1.043540264s to wait for apiserver process to appear ...
	I0520 11:46:52.794823   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:52.794847   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:52.795569   58840 api_server.go:269] stopped: https://192.168.61.221:8444/healthz: Get "https://192.168.61.221:8444/healthz": dial tcp 192.168.61.221:8444: connect: connection refused
	I0520 11:46:53.294972   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.015833   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.015872   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.015889   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.053674   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.053705   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.294988   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.300440   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.300474   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:56.795720   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.813192   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.813221   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.295045   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.298962   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:57.298994   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.795590   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.800795   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:46:57.809060   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:57.809084   58840 api_server.go:131] duration metric: took 5.014253865s to wait for apiserver health ...
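For reference, the repeated 500 responses above are the apiserver's /healthz reporting that the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet; the wait simply polls until a 200 comes back. Below is a minimal sketch of such a poll loop in Go — illustrative only, not minikube's api_server.go; the endpoint and the roughly 500ms interval are taken from the log above, and TLS verification is skipped purely because the test endpoint uses a self-signed certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline expires. A real client would trust the cluster CA instead of
// disabling verification.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly every 500ms
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.221:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}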
	I0520 11:46:57.809092   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:57.809098   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:57.811020   58840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:54.864263   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.363797   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.812202   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:57.824260   58840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
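The bridge CNI step above creates /etc/cni/net.d and copies a 496-byte conflist onto the node; the file's contents are not shown in the log. The sketch below writes an illustrative bridge+portmap conflist of the same general shape — the concrete values (network name, subnet) are assumptions for illustration, not the contents of minikube's generated 1-k8s.conflist.

package main

import (
	"log"
	"os"
)

// An illustrative CNI conflist using the standard bridge and portmap plugins.
// The structure follows the CNI spec; the values are placeholders.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}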
	I0520 11:46:57.843189   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:57.851978   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:57.852012   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:57.852029   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:57.852035   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:57.852042   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:57.852046   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:46:57.852055   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:57.852063   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:57.852067   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:46:57.852073   58840 system_pods.go:74] duration metric: took 8.872046ms to wait for pod list to return data ...
	I0520 11:46:57.852084   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:57.855020   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:57.855043   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:57.855052   58840 node_conditions.go:105] duration metric: took 2.95984ms to run NodePressure ...
	I0520 11:46:57.855065   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:58.124410   58840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129201   58840 kubeadm.go:733] kubelet initialised
	I0520 11:46:58.129222   58840 kubeadm.go:734] duration metric: took 4.789292ms waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129229   58840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:58.134883   58840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.139522   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139557   58840 pod_ready.go:81] duration metric: took 4.638772ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.139569   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139583   58840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.143636   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143657   58840 pod_ready.go:81] duration metric: took 4.061011ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.143669   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143677   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.147479   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147499   58840 pod_ready.go:81] duration metric: took 3.813494ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.147510   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147517   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.247677   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247710   58840 pod_ready.go:81] duration metric: took 100.183226ms for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.247723   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247730   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.648289   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648339   58840 pod_ready.go:81] duration metric: took 400.599602ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.648352   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648365   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.047071   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047099   58840 pod_ready.go:81] duration metric: took 398.721478ms for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.047108   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047114   58840 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.447840   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447866   58840 pod_ready.go:81] duration metric: took 400.738125ms for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.447876   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447885   58840 pod_ready.go:38] duration metric: took 1.318647527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
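The pod_ready lines above skip each system pod because the node hosting it is not yet "Ready"; once the node flips, the same wait re-checks the pods' own Ready condition. A minimal client-go sketch of that basic per-pod check follows — illustrative, not minikube's pod_ready.go; the kubeconfig path and pod name are placeholders lifted from the log above.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the run above uses its own minikube kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18925-3819/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-x92p9", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}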
	I0520 11:46:59.447899   58840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:59.459985   58840 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:59.460011   58840 kubeadm.go:591] duration metric: took 9.176298972s to restartPrimaryControlPlane
	I0520 11:46:59.460023   58840 kubeadm.go:393] duration metric: took 9.227629245s to StartCluster
	I0520 11:46:59.460038   58840 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.460120   58840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:59.461897   58840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.462172   58840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:59.464109   58840 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:59.462263   58840 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:59.462375   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:59.465582   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:59.464212   58840 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465645   58840 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465657   58840 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:59.465693   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464218   58840 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465752   58840 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465768   58840 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:59.465794   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464219   58840 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465877   58840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-668445"
	I0520 11:46:59.466087   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466116   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466210   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466216   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466232   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466235   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.481963   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:46:59.482030   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0520 11:46:59.482551   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.482768   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.483115   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483140   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483495   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483516   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483759   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.483930   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.483979   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.484484   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.484509   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.485272   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 11:46:59.485657   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.486686   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.486716   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.487080   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.487552   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.487588   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.487989   58840 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.488007   58840 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:59.488037   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.488349   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.488384   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.500796   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0520 11:46:59.501376   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.501793   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0520 11:46:59.502058   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.502098   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.502308   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.502439   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.502661   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.502737   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 11:46:59.502976   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503009   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.503199   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.503329   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.503548   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.503648   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503659   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.504278   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.504632   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.504814   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.506755   58840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:59.504875   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.505278   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.508201   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:59.508219   58840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:59.508238   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.509465   58840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:59.510750   58840 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.510765   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:59.510778   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.511023   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511444   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.511480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511707   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.511939   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.512117   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.512271   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.513612   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.513980   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.514009   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.514207   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.514354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.514460   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.514564   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.523003   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0520 11:46:59.523361   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.523835   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.523858   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.524115   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.524266   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.525489   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.525661   58840 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.525673   58840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:59.525685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.527762   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.528087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.528309   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.528414   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.528559   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.652795   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:59.672785   58840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:46:59.759683   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.760767   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.775512   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:59.775536   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:59.807608   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:59.807633   58840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:59.835597   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:59.835624   58840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:59.862717   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:47:00.145002   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145032   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145352   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145352   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145372   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.145380   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145389   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145645   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145700   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145717   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.150992   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.151008   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.151358   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.151376   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.151385   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.802987   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803030   58840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042235492s)
	I0520 11:47:00.803063   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803081   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803386   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803404   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803422   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803424   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803434   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803439   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803449   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803464   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803621   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803631   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803642   58840 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-668445"
	I0520 11:47:00.803677   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803729   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803743   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.805628   58840 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
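After the manifests are applied, the run reports "Verifying addon metrics-server=true". A small client-go sketch of one way to perform such a verification is shown below — assuming the addon's Deployment is named "metrics-server" in kube-system, as the pod names earlier in the log suggest; this is an illustration, not minikube's addon verification code.

package main

import (
	"context"
	"fmt"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust to the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18925-3819/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	dep, err := client.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	available := false
	for _, c := range dep.Status.Conditions {
		if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
			available = true
		}
	}
	fmt.Printf("metrics-server available=%v (%d/%d replicas ready)\n",
		available, dep.Status.ReadyReplicas, dep.Status.Replicas)
}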
	I0520 11:46:58.478246   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:58.478733   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:58.478765   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:58.478670   59753 retry.go:31] will retry after 3.220647355s: waiting for machine to come up
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.806818   58840 addons.go:505] duration metric: took 1.344560138s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0520 11:47:01.675861   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.863455   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.863859   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.702053   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702503   57519 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:47:01.702519   57519 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:47:01.702532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702964   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.702994   57519 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:47:01.703017   57519 main.go:141] libmachine: (no-preload-814599) DBG | skip adding static IP to network mk-no-preload-814599 - found existing host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"}
	I0520 11:47:01.703036   57519 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:47:01.703048   57519 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:47:01.705097   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705376   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.705409   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705482   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:47:01.705511   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:47:01.705543   57519 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:47:01.705558   57519 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:47:01.705593   57519 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:47:01.833182   57519 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:47:01.833551   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:47:01.834149   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:01.836532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837008   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.837047   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837310   57519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:47:01.837509   57519 machine.go:94] provisionDockerMachine start ...
	I0520 11:47:01.837527   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:01.837717   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.839814   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840151   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.840171   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840335   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.840485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840631   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.840958   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.841138   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.841149   57519 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:47:01.953190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:47:01.953221   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953480   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:47:01.953504   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953667   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.956155   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956487   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.956518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956684   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.956884   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957071   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957239   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.957416   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.957580   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.957593   57519 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:47:02.079993   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:47:02.080018   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.082623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.082950   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.082984   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.083154   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.083353   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083520   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083677   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.083852   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.084014   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.084029   57519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:47:02.204190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:47:02.204225   57519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:47:02.204252   57519 buildroot.go:174] setting up certificates
	I0520 11:47:02.204264   57519 provision.go:84] configureAuth start
	I0520 11:47:02.204280   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:02.204562   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:02.207132   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207477   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.207501   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207633   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.209905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210212   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.210231   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210414   57519 provision.go:143] copyHostCerts
	I0520 11:47:02.210472   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:47:02.210493   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:47:02.210560   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:47:02.210743   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:47:02.210758   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:47:02.210795   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:47:02.210875   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:47:02.210883   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:47:02.210905   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:47:02.210962   57519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
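The provisioning step above generates a server certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube, and the machine name. A quick way to confirm which SANs ended up in the generated server.pem is to parse it with crypto/x509, as in the sketch below — the path is the one shown in the log and should be adjusted for other workspaces.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// Parses the generated server certificate and prints its SANs so they can be
// compared against the list logged by provision.go above.
func main() {
	raw, err := os.ReadFile("/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}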
	I0520 11:47:02.411383   57519 provision.go:177] copyRemoteCerts
	I0520 11:47:02.411436   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:47:02.411458   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.414071   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414443   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.414473   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414675   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.414859   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.415006   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.415125   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.507148   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:47:02.536944   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:47:02.562291   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:47:02.589034   57519 provision.go:87] duration metric: took 384.752486ms to configureAuth
	I0520 11:47:02.589092   57519 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:47:02.589320   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:47:02.589412   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.592287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592704   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.592731   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592957   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.593180   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593368   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593561   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.593724   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.593910   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.593931   57519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:47:02.899943   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:47:02.900037   57519 machine.go:97] duration metric: took 1.062511085s to provisionDockerMachine
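
provisionDockerMachine finishes by writing a sysconfig drop-in so CRI-O treats the service CIDR as an insecure registry and then restarting the runtime. The same step run by hand on the guest, condensed from the command in the log:

  sudo mkdir -p /etc/sysconfig
  printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
    | sudo tee /etc/sysconfig/crio.minikube
  sudo systemctl restart crio
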
	I0520 11:47:02.900074   57519 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:47:02.900112   57519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:47:02.900155   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:02.900546   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:47:02.900581   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.903643   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904057   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.904089   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904253   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.904433   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.904580   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.904710   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.999752   57519 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:47:03.005832   57519 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:47:03.005856   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:47:03.005940   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:47:03.006029   57519 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:47:03.006145   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:47:03.016853   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:03.049845   57519 start.go:296] duration metric: took 149.725ms for postStartSetup
	I0520 11:47:03.049886   57519 fix.go:56] duration metric: took 19.183954619s for fixHost
	I0520 11:47:03.049912   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.052841   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053175   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.053203   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053342   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.053537   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053730   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053899   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.054048   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:03.054227   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:03.054241   57519 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0520 11:47:03.166461   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205623.121199957
	
	I0520 11:47:03.166490   57519 fix.go:216] guest clock: 1716205623.121199957
	I0520 11:47:03.166500   57519 fix.go:229] Guest: 2024-05-20 11:47:03.121199957 +0000 UTC Remote: 2024-05-20 11:47:03.049891315 +0000 UTC m=+366.742654246 (delta=71.308642ms)
	I0520 11:47:03.166523   57519 fix.go:200] guest clock delta is within tolerance: 71.308642ms
	I0520 11:47:03.166530   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 19.30063132s
	I0520 11:47:03.166554   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.166858   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:03.169486   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169816   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.169861   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169990   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170546   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170728   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170814   57519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:47:03.170875   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.170920   57519 ssh_runner.go:195] Run: cat /version.json
	I0520 11:47:03.170944   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.173717   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.173944   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174128   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174153   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174338   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174365   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174371   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174570   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174571   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174793   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.174956   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.175049   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:03.175169   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:47:03.276456   57519 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:47:03.276537   57519 ssh_runner.go:195] Run: systemctl --version
	I0520 11:47:03.282946   57519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:47:03.428948   57519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:47:03.436462   57519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:47:03.436529   57519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:47:03.453022   57519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:47:03.453045   57519 start.go:494] detecting cgroup driver to use...
	I0520 11:47:03.453103   57519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:47:03.469505   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:47:03.483617   57519 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:47:03.483673   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:47:03.498514   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:47:03.513169   57519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:47:03.638355   57519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:47:03.819407   57519 docker.go:233] disabling docker service ...
	I0520 11:47:03.819473   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:47:03.837928   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:47:03.853747   57519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:47:03.976480   57519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:47:04.097685   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
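
The block above makes sure only CRI-O owns the CRI socket: cri-docker and docker are stopped, their sockets disabled, and the services masked. Condensed into a hand-run sequence (same systemctl verbs as the log; grouping the units on one line is our own shorthand):

  sudo systemctl stop -f cri-docker.socket cri-docker.service
  sudo systemctl disable cri-docker.socket
  sudo systemctl mask cri-docker.service
  sudo systemctl stop -f docker.socket docker.service
  sudo systemctl disable docker.socket
  sudo systemctl mask docker.service
  sudo systemctl is-active --quiet docker || echo "docker is stopped"
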
	I0520 11:47:04.112917   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:47:04.132979   57519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:47:04.133044   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.145501   57519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:47:04.145594   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.157957   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.168774   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.180521   57519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:47:04.191739   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.203232   57519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.221386   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
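
Taken together, the edits above point crictl at the CRI-O socket and rewrite /etc/crio/crio.conf.d/02-crio.conf to use the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A quick way to confirm the result on the guest (the grep patterns are ours; the expected values come from the sed commands in the log):

  cat /etc/crictl.yaml
  #   runtime-endpoint: unix:///var/run/crio/crio.sock
  grep -E '(pause_image|cgroup_manager|conmon_cgroup) = ' /etc/crio/crio.conf.d/02-crio.conf
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  grep -A 1 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",
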
	I0520 11:47:04.231999   57519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:47:04.243401   57519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:47:04.243447   57519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:47:04.259232   57519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
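
The sysctl probe fails because br_netfilter is not loaded yet, so the runner loads the module and turns on IPv4 forwarding before restarting CRI-O. The same kernel prerequisites by hand (the final check is ours; the key typically reads 1 once the module is present):

  sudo modprobe br_netfilter
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
  sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "No such file or directory"
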
	I0520 11:47:04.270945   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:04.387908   57519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:47:04.526507   57519 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:47:04.526588   57519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:47:04.531568   57519 start.go:562] Will wait 60s for crictl version
	I0520 11:47:04.531628   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.535751   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:47:04.583384   57519 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:47:04.583470   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.617545   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.652685   57519 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:47:04.654199   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:04.657274   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657624   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:04.657653   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657858   57519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:47:04.662344   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
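
The host.minikube.internal entry is added with a filter-and-append rewrite of /etc/hosts rather than a blind echo, so re-runs never duplicate the line; the same pattern is reused later for control-plane.minikube.internal. A generic sketch of that pattern (variable names are ours):

  NAME=host.minikube.internal
  ADDR=192.168.39.1
  # only rewrite the file if the exact "<addr><TAB><name>" line is missing
  grep -q "$(printf '%s\t%s' "$ADDR" "$NAME")$" /etc/hosts || {
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
  }
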
	I0520 11:47:04.676310   57519 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:47:04.676437   57519 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:47:04.676480   57519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:47:04.723427   57519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:47:04.723452   57519 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:47:04.723502   57519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.723705   57519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.723729   57519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.723773   57519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.723829   57519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.723896   57519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.723913   57519 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:47:04.723767   57519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.725737   57519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.725630   57519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.725696   57519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.849687   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896118   57519 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:47:04.896166   57519 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896214   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.901457   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.940481   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:47:04.940581   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946097   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0520 11:47:04.946120   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946168   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.958539   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:05.058882   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:05.059155   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:05.061672   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:47:05.082727   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:05.087840   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:05.606193   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.677231   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.176977   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.677184   58840 node_ready.go:49] node "default-k8s-diff-port-668445" has status "Ready":"True"
	I0520 11:47:06.677208   58840 node_ready.go:38] duration metric: took 7.004390787s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:47:06.677217   58840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:06.684890   58840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690323   58840 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.690345   58840 pod_ready.go:81] duration metric: took 5.420403ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690354   58840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695944   58840 pod_ready.go:92] pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.695968   58840 pod_ready.go:81] duration metric: took 5.606311ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695979   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701932   58840 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.701953   58840 pod_ready.go:81] duration metric: took 5.965003ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701966   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:04.363354   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:06.363448   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.125249   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1: (3.166660678s)
	I0520 11:47:08.125272   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (3.179080546s)
	I0520 11:47:08.125294   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:47:08.125314   57519 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:47:08.125343   57519 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.125346   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1: (3.066428315s)
	I0520 11:47:08.125383   57519 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:47:08.125398   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125415   57519 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.125415   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (3.066238572s)
	I0520 11:47:08.125472   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125517   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (3.063820229s)
	I0520 11:47:08.125478   57519 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:47:08.125582   57519 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.125596   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (3.04284016s)
	I0520 11:47:08.125612   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125637   57519 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:47:08.125659   57519 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.125668   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1: (3.037804525s)
	I0520 11:47:08.125696   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125707   57519 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:47:08.125709   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.519483497s)
	I0520 11:47:08.125727   57519 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.125758   57519 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:47:08.125775   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125808   57519 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.125843   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.140377   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.140435   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.140472   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.140518   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.140550   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.140604   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.277671   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:47:08.277785   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.286159   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:47:08.286163   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:47:08.286261   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:08.286297   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:08.287554   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:47:08.287623   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:08.300162   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:47:08.300183   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0520 11:47:08.300197   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300203   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0520 11:47:08.300221   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0520 11:47:08.300235   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0520 11:47:08.300168   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:47:08.300245   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300254   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:08.300315   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:08.304669   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0520 11:47:10.580290   57519 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.279947927s)
	I0520 11:47:10.580392   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0520 11:47:10.580421   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.280153203s)
	I0520 11:47:10.580444   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:47:10.580466   57519 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:10.580514   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:07.717364   58840 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:07.717391   58840 pod_ready.go:81] duration metric: took 1.01541621s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:07.717404   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041632   58840 pod_ready.go:92] pod "kube-proxy-7ddtk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:08.041657   58840 pod_ready.go:81] duration metric: took 324.24633ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041669   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:10.047877   58840 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.364366   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:10.366662   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:12.895005   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:11.650206   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.069627914s)
	I0520 11:47:11.650241   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:47:11.650265   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:11.650322   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:13.011174   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.360821671s)
	I0520 11:47:13.011203   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:47:13.011225   57519 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.011270   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:12.048869   58840 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:12.048894   58840 pod_ready.go:81] duration metric: took 4.007218109s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:12.048903   58840 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:14.057711   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.555471   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:15.365437   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:17.864290   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.885885   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.874589428s)
	I0520 11:47:16.885917   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:47:16.885952   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:16.886034   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:18.674100   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.788037677s)
	I0520 11:47:18.674128   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:47:18.674149   57519 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:18.674197   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:20.543671   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.86944462s)
	I0520 11:47:20.543705   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:47:20.543733   57519 cache_images.go:123] Successfully loaded all cached images
	I0520 11:47:20.543739   57519 cache_images.go:92] duration metric: took 15.820273224s to LoadCachedImages
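
Because no preload tarball exists for v1.30.1 with crio, each of the eight images goes through the same cycle: podman image inspect to see whether the runtime already has it, crictl rmi to drop a stale tag, then podman load from the cached archive under /var/lib/minikube/images; the whole pass took about 15.8s here. One iteration of that cycle by hand (paths taken from the log; the conditional wiring is ours):

  IMG=registry.k8s.io/kube-proxy:v1.30.1
  TAR=/var/lib/minikube/images/kube-proxy_v1.30.1
  sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1 || {
    sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # remove any stale tag
    sudo podman load -i "$TAR"                            # load the cached image archive
  }
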
	I0520 11:47:20.543749   57519 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:47:20.543863   57519 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:47:20.543944   57519 ssh_runner.go:195] Run: crio config
	I0520 11:47:20.593647   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:20.593672   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:20.593692   57519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:47:20.593719   57519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:47:20.593863   57519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
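The multi-document YAML above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later hands to the kubeadm init phase commands. As a minimal illustrative sketch only, not minikube code, and assuming gopkg.in/yaml.v3 is available, the stream can be decoded one document at a time like this:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; adjust if the file lives elsewhere.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each document is one of InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the multi-document stream
		}
		if kind, _ := doc["kind"].(string); kind == "ClusterConfiguration" {
			fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
			fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
		}
	}
}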
	I0520 11:47:20.593936   57519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:47:20.604739   57519 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:47:20.604791   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:47:20.614029   57519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:47:20.630587   57519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:47:20.647296   57519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:47:20.664453   57519 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:47:20.668296   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:20.681086   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:20.816542   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:47:20.834344   57519 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:47:20.834369   57519 certs.go:194] generating shared ca certs ...
	I0520 11:47:20.834388   57519 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:47:20.834578   57519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:47:20.834634   57519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:47:20.834649   57519 certs.go:256] generating profile certs ...
	I0520 11:47:20.834777   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:47:20.834876   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:47:20.834928   57519 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:47:20.835090   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:47:20.835146   57519 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:47:20.835159   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:47:20.835192   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:47:20.835226   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:47:20.835255   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:47:20.835314   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:20.836325   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:47:20.883227   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:47:20.918566   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:47:20.959097   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:47:21.006422   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:47:21.035717   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:47:21.071087   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:47:21.094330   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:47:21.119597   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:47:21.145488   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:47:21.173152   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:47:21.197627   57519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:47:21.214595   57519 ssh_runner.go:195] Run: openssl version
	I0520 11:47:21.220678   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:47:21.232251   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.236953   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.237015   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.243048   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:47:21.253667   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:47:21.264223   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268607   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268648   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.274112   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:47:21.284427   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:47:21.294811   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299403   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299455   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.304954   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:47:21.315320   57519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:47:21.319956   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:47:21.325692   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:47:21.331232   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:47:21.337240   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
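Each of the openssl x509 -noout -checkend 86400 runs above asks whether a certificate will still be valid 24 hours from now. A minimal standalone equivalent in Go, using only the standard library and one of the certificate paths named in the log (an illustration, not what certs.go actually does):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certs checked in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400 succeeds only if the cert is still valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h:", cert.NotAfter)
}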
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
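The repeated crictl calls in this block all follow the same pattern: list every container whose name matches one control-plane component, then report when none are found. A minimal sketch of that pattern, shelling out to crictl the same way (illustrative only, not the cri.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: sudo crictl ps -a --quiet --name=kube-apiserver
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// --quiet prints one container ID per line; Fields drops blank lines.
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println(`no container was found matching "kube-apiserver"`)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}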
	I0520 11:47:18.556589   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.557476   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.363536   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:22.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:21.342881   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:47:21.349172   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:47:21.354721   57519 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:47:21.354839   57519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:47:21.354923   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.399473   57519 cri.go:89] found id: ""
	I0520 11:47:21.399554   57519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:47:21.413632   57519 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:47:21.413656   57519 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:47:21.413662   57519 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:47:21.413712   57519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:47:21.425556   57519 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:47:21.426515   57519 kubeconfig.go:125] found "no-preload-814599" server: "https://192.168.39.185:8443"
	I0520 11:47:21.428556   57519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:47:21.438628   57519 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.185
	I0520 11:47:21.438655   57519 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:47:21.438668   57519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:47:21.438708   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.485080   57519 cri.go:89] found id: ""
	I0520 11:47:21.485149   57519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:47:21.502591   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:47:21.512148   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:47:21.512168   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:47:21.512208   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:47:21.521378   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:47:21.521434   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:47:21.531568   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:47:21.541404   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:47:21.541459   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:47:21.552018   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.562667   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:47:21.562711   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.572240   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:47:21.581472   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:47:21.581535   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:47:21.591072   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:47:21.601114   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:21.716435   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.597273   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.792224   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.866757   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.999294   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:47:22.999389   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.499642   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.000424   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.022646   57519 api_server.go:72] duration metric: took 1.023349447s to wait for apiserver process to appear ...
	I0520 11:47:24.022673   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:47:24.022691   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:24.023162   57519 api_server.go:269] stopped: https://192.168.39.185:8443/healthz: Get "https://192.168.39.185:8443/healthz": dial tcp 192.168.39.185:8443: connect: connection refused
	I0520 11:47:24.523760   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:23.055887   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:25.555044   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:24.863476   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.363148   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.158548   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.158572   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.158585   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.205830   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.205867   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.523283   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.527618   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:27.527643   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
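The healthz polling above hits https://192.168.39.185:8443/healthz repeatedly: first a connection refused while the apiserver is still coming up, then 403 for the anonymous request until the RBAC bootstrap roles exist, then 500 while individual post-start hooks are still failing, and finally 200 once every check passes. A minimal sketch of such a poller, written under those assumptions (an illustration, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// No client certificate is presented, which is why the real probe is seen
	// as system:anonymous; server verification is skipped for the same reason.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.39.185:8443/healthz")
		if err != nil {
			// e.g. "connection refused" while the apiserver container restarts
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				fmt.Println(string(body)) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}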
	I0520 11:47:28.023238   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.029601   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.029630   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.522849   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.528350   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.528372   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.023422   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.027460   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.027493   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.523739   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.528289   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.528317   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.022859   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.027324   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:30.027348   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.522929   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.527596   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:47:30.534571   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:47:30.534592   57519 api_server.go:131] duration metric: took 6.511912744s to wait for apiserver health ...
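
The block above shows minikube polling https://192.168.39.185:8443/healthz roughly every 500ms, getting 500 while the apiservice-discovery-controller post-start hook still fails, and finally getting 200 after about 6.5s. A minimal Go sketch of such a wait loop follows; it is not minikube's api_server.go, and the interval, timeout and TLS handling are assumptions for illustration only.

// healthz_wait.go: poll an apiserver /healthz endpoint until it returns 200
// or the deadline expires. Illustrative sketch only; the endpoint is taken
// from the log, the interval and timeout are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster uses a self-signed CA, so certificate checks are
		// skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.185:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
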
	I0520 11:47:30.534601   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:30.534609   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:30.536647   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:47:30.537896   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:47:30.548779   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
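
Here the bridge CNI config is copied to /etc/cni/net.d/1-k8s.conflist. The actual 496-byte file is not reproduced in the log; the sketch below writes a typical bridge/host-local conflist whose contents (subnet, portmap plugin, flags) are assumptions, not the file minikube generated.

// write_cni.go: write a bridge CNI config to /etc/cni/net.d/1-k8s.conflist.
// The JSON below is an assumed, typical bridge/host-local configuration,
// shown only to illustrate what such a conflist usually contains.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing under /etc/cni/net.d requires root on a real node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
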
	I0520 11:47:30.568124   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:47:30.577489   57519 system_pods.go:59] 8 kube-system pods found
	I0520 11:47:30.577522   57519 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:47:30.577530   57519 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:47:30.577538   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:47:30.577544   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:47:30.577550   57519 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:47:30.577560   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:47:30.577567   57519 system_pods.go:61] "metrics-server-569cc877fc-v2zh9" [1be844eb-66ba-4a82-86fb-6f53de7e4d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:47:30.577577   57519 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:47:30.577586   57519 system_pods.go:74] duration metric: took 9.439264ms to wait for pod list to return data ...
	I0520 11:47:30.577598   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:47:30.581346   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:47:30.581367   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:47:30.581377   57519 node_conditions.go:105] duration metric: took 3.772205ms to run NodePressure ...
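
The system_pods and node_conditions lines above amount to: list the kube-system pods, then read each node's CPU and ephemeral-storage capacity. A rough client-go equivalent, assuming the kubeconfig at /var/lib/minikube/kubeconfig, might look like the sketch below (not minikube's system_pods.go):

// list_system_pods.go: list kube-system pods and report node capacity,
// mirroring the information the log lines above record.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
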
	I0520 11:47:30.581398   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:30.849703   57519 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853895   57519 kubeadm.go:733] kubelet initialised
	I0520 11:47:30.853917   57519 kubeadm.go:734] duration metric: took 4.191303ms waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853925   57519 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:30.861807   57519 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.866768   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866797   57519 pod_ready.go:81] duration metric: took 4.964994ms for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.866809   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866819   57519 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.871001   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871027   57519 pod_ready.go:81] duration metric: took 4.200278ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.871038   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871046   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.875615   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875635   57519 pod_ready.go:81] duration metric: took 4.580378ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.875645   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875655   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.971186   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971210   57519 pod_ready.go:81] duration metric: took 95.545422ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.971219   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971226   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
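
The pod_ready.go lines wait up to 4m0s for each system-critical pod's Ready condition, skipping pods whose node is not yet Ready. A simplified sketch of that condition check follows; the pod name, namespace and kubeconfig path are taken from the log, but the loop itself is an assumption, not minikube's implementation:

// pod_ready_check.go: wait until a named pod reports the Ready condition as
// True, the same condition the pod_ready.go lines above poll for.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-mqg49", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
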
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
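
Each cycle above runs "sudo crictl ps -a --quiet --name=<component>" for every control-plane component and finds no containers, which is why the subsequent describe-nodes call is refused on localhost:8443. A small Go wrapper that reproduces that check directly on the node (the SSH hop minikube uses is omitted) could look like the sketch below; the component list is the one shown in the log:

// crictl_list.go: run crictl for each control-plane component and report how
// many container IDs come back, mirroring the "0 containers" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per line when any exist
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
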
	I0520 11:47:27.555356   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.556470   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.862807   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:32.364158   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:31.372203   57519 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:31.372223   57519 pod_ready.go:81] duration metric: took 400.985089ms for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:31.372231   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:33.379905   57519 pod_ready.go:102] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:35.378764   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:35.378793   57519 pod_ready.go:81] duration metric: took 4.006551156s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:35.378804   57519 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:32.056046   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.555606   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:36.556231   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.862916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.363328   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.385861   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.884432   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.557813   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.054616   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.863318   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:42.362965   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.893841   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.387666   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:43.055052   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:45.056688   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.863217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.362713   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:46.886365   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.384568   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.056743   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.555489   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.363267   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.863214   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.385469   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.884960   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:55.885430   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:52.055220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.059922   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.554742   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.862720   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.886732   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:00.387613   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:58.558508   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.055219   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:58.862939   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.362135   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:02.886809   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:04.887274   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:48:03.555849   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:06.059026   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.363379   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:05.863603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:07.386458   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:09.884793   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:08.554935   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.555580   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:08.362397   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.863960   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.385193   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:14.385749   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:13.058312   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.556249   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:13.363188   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.363224   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:17.861944   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:16.884628   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.885335   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.887306   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:18.055244   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.056597   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:19.862408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.862760   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:23.386334   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:25.388116   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.555469   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.362744   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:26.865098   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.884029   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.885238   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:27.054447   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.056075   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.556385   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.368181   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.862789   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:33.885831   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:34.055168   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.056035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.362591   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.863645   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.385081   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:38.884785   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.886065   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:38.556294   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.556701   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:39.362479   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:41.861916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.387356   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.884457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.056625   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.555575   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.864183   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.362155   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:47.885261   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:49.886656   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:48.055288   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.057220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:48.362952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.363988   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.863467   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.385482   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.386207   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
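
Each pass fails at the same step: the bundled kubectl cannot reach localhost:8443, so describe nodes exits 1 with "connection refused". A quick way to confirm from inside the guest that nothing is bound to the apiserver port, and to pull the kubelet lines that usually explain why the static pod never started (a sketch; it assumes ss and journalctl are available in the VM image, which they are in recent minikube guests):

    sudo ss -tlnp 2>/dev/null | grep ':8443 ' || echo "nothing listening on 8443 - kube-apiserver is not up"
    sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'apiserver|failed' | tail -n 20
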
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:52.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.557133   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:55.362033   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:57.362624   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:56.885083   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.885662   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
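
Interleaved with the 58316 loop, three other test profiles (processes 58840, 58398 and 57519) are polling metrics-server pods in kube-system, and every poll above still reports Ready as False. The equivalent one-off check from the host looks roughly like this (a sketch; the --context value is a placeholder for the affected profile name, and the pod name is taken from the 57519 lines above):

    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-v2zh9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    # or block with a timeout instead of polling by hand
    kubectl --context <profile> -n kube-system wait pod/metrics-server-569cc877fc-v2zh9 --for=condition=Ready --timeout=120s
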
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:57.055207   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.055409   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.554795   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.362869   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.363107   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.385471   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.386313   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.386387   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:03.555624   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:06.055651   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.862582   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.862917   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.863213   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.886494   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.384808   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.055875   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.056882   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.362663   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.863246   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.388256   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.887822   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:12.557989   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:15.056569   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.864153   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.362222   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:16.888363   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.385523   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.056820   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.556351   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.362408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.861822   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.386338   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.390122   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:25.885748   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:22.057526   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:24.554983   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.864449   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.363217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.384733   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:30.884703   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.056582   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:29.057033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.862736   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.362469   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.885757   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.385655   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.555033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.555749   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:33.363377   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.863335   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:37.885668   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.385866   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:37.559287   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.055915   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.362779   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.862212   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.862459   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.385950   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.387966   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.056415   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.555806   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:45.363614   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:47.863671   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:46.884916   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.885117   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:49:47.056668   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:49.554453   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.555691   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:50.363307   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:52.862785   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.386280   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:53.386860   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:55.886047   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:54.055131   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:56.055386   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:54.863545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.363023   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:58.385457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.385971   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:58.554785   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.555726   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:59.365132   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:01.863538   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:02.884908   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.384186   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.054799   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.056653   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.865619   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:07.387760   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.885607   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.555902   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.558035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:08.366901   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:10.862545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.862664   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:12.385235   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.886286   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:12.055821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.865340   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.362603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.388804   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.885030   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.056232   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.555242   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.555339   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.363446   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.861989   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.886042   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:24.384353   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.555974   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.054899   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.862506   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:25.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.384558   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.384818   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.387241   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.055327   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.055628   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.365187   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.862199   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.863892   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.885393   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.384549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.555087   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.055509   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.362103   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.362430   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.386206   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.386258   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.055828   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.554574   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.554821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.862750   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.862973   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.885277   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.886162   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.555316   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:45.556417   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:44.364091   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.364286   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.385011   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.385754   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.387956   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.055237   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.555567   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.362404   58398 pod_ready.go:81] duration metric: took 4m0.006145654s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:50:48.362424   58398 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:50:48.362431   58398 pod_ready.go:38] duration metric: took 4m3.605442277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:50:48.362444   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:50:48.362469   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:48.362528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:48.422343   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.422361   58398 cri.go:89] found id: ""
	I0520 11:50:48.422369   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:48.422428   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.427448   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:48.427512   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:48.477327   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.477353   58398 cri.go:89] found id: ""
	I0520 11:50:48.477361   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:48.477418   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.482113   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:48.482161   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:48.525019   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:48.525039   58398 cri.go:89] found id: ""
	I0520 11:50:48.525046   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:48.525087   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.529544   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:48.529616   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:48.571031   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:48.571055   58398 cri.go:89] found id: ""
	I0520 11:50:48.571065   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:48.571117   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.575601   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:48.575653   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:48.617086   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:48.617111   58398 cri.go:89] found id: ""
	I0520 11:50:48.617120   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:48.617174   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.623512   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:48.623585   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:48.660141   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.660170   58398 cri.go:89] found id: ""
	I0520 11:50:48.660179   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:48.660236   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.665578   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:48.665648   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:48.704366   58398 cri.go:89] found id: ""
	I0520 11:50:48.704398   58398 logs.go:276] 0 containers: []
	W0520 11:50:48.704408   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:48.704418   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:48.704482   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:48.745897   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:48.745925   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:48.745931   58398 cri.go:89] found id: ""
	I0520 11:50:48.745940   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:48.745997   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.750982   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.755064   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:48.755084   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.804692   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:48.804720   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.862735   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:48.862765   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:48.914854   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:48.914885   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:48.930449   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:48.930473   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.980371   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:48.980398   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:49.019847   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:49.019871   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:49.066103   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:49.066131   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:49.102912   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:49.102938   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:49.148380   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:49.148403   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:49.184436   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:49.184466   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:49.658226   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:49.658267   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:49.715674   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:49.715724   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:52.360868   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:52.378573   58398 api_server.go:72] duration metric: took 4m14.818629715s to wait for apiserver process to appear ...
	I0520 11:50:52.378595   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:50:52.378634   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:52.378694   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:52.423033   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.423057   58398 cri.go:89] found id: ""
	I0520 11:50:52.423067   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:52.423124   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.427361   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:52.427422   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:52.472357   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.472383   58398 cri.go:89] found id: ""
	I0520 11:50:52.472393   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:52.472452   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.477472   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:52.477536   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:52.517721   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.517742   58398 cri.go:89] found id: ""
	I0520 11:50:52.517750   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:52.517806   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.522347   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:52.522415   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:52.562720   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:52.562747   58398 cri.go:89] found id: ""
	I0520 11:50:52.562757   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:52.562815   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.567322   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:52.567378   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:52.605752   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:52.605779   58398 cri.go:89] found id: ""
	I0520 11:50:52.605787   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:52.605841   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.609932   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:52.609996   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:52.646704   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:52.646734   58398 cri.go:89] found id: ""
	I0520 11:50:52.646745   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:52.646805   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.651422   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:52.651493   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:52.690880   58398 cri.go:89] found id: ""
	I0520 11:50:52.690909   58398 logs.go:276] 0 containers: []
	W0520 11:50:52.690919   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:52.690927   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:52.690997   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:52.733525   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.733558   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:52.733562   58398 cri.go:89] found id: ""
	I0520 11:50:52.733568   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:52.733615   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.738004   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.743086   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:52.743108   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:52.795042   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:52.795078   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:52.809356   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:52.809384   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.854486   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:52.854517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.904266   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:52.904289   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.941257   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:52.941285   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.886142   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.385782   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:53.057541   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.554909   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:52.978583   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:52.978613   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:53.081246   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:53.081276   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:53.124472   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:53.124509   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:53.167027   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:53.167055   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:53.223091   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:53.223125   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:53.261390   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:53.261417   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:53.680820   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:53.680877   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:56.227910   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:50:56.233856   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:50:56.234827   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:50:56.234852   58398 api_server.go:131] duration metric: took 3.856250847s to wait for apiserver health ...
	I0520 11:50:56.234859   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:50:56.234880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:56.234923   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:56.273520   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.273539   58398 cri.go:89] found id: ""
	I0520 11:50:56.273545   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:56.273589   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.278121   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:56.278181   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:56.311934   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.311958   58398 cri.go:89] found id: ""
	I0520 11:50:56.311966   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:56.312011   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.315972   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:56.316033   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:56.351237   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.351259   58398 cri.go:89] found id: ""
	I0520 11:50:56.351268   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:56.351323   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.355480   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:56.355528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:56.389768   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.389782   58398 cri.go:89] found id: ""
	I0520 11:50:56.389789   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:56.389829   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.393815   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:56.393869   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:56.436557   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:56.436582   58398 cri.go:89] found id: ""
	I0520 11:50:56.436592   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:56.436637   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.440866   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:56.440943   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:56.480688   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.480734   58398 cri.go:89] found id: ""
	I0520 11:50:56.480744   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:56.480814   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.485880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:56.485974   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:56.521965   58398 cri.go:89] found id: ""
	I0520 11:50:56.521992   58398 logs.go:276] 0 containers: []
	W0520 11:50:56.521999   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:56.522005   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:56.522063   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:56.567585   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.567611   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:56.567617   58398 cri.go:89] found id: ""
	I0520 11:50:56.567627   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:56.567677   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.572972   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.577459   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:56.577484   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.636435   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:56.636467   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.675690   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:56.675714   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.720035   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:56.720065   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:56.734275   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:56.734302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:56.837237   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:56.837268   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.885482   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:56.885517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.922276   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:56.922302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.971119   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:56.971148   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:57.010383   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:57.010414   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:57.044880   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:57.044909   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:57.097906   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:57.097936   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:57.147600   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:57.147626   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:59.998576   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:50:59.998601   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:50:59.998605   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:50:59.998609   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:50:59.998613   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:50:59.998617   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:50:59.998621   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:50:59.998627   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:50:59.998631   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:50:59.998639   58398 system_pods.go:74] duration metric: took 3.763773626s to wait for pod list to return data ...
	I0520 11:50:59.998645   58398 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:00.001599   58398 default_sa.go:45] found service account: "default"
	I0520 11:51:00.001619   58398 default_sa.go:55] duration metric: took 2.967907ms for default service account to be created ...
	I0520 11:51:00.001626   58398 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:00.007124   58398 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:00.007151   58398 system_pods.go:89] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:51:00.007156   58398 system_pods.go:89] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:51:00.007161   58398 system_pods.go:89] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:51:00.007168   58398 system_pods.go:89] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:51:00.007175   58398 system_pods.go:89] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:51:00.007182   58398 system_pods.go:89] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:51:00.007194   58398 system_pods.go:89] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:00.007205   58398 system_pods.go:89] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:51:00.007215   58398 system_pods.go:126] duration metric: took 5.582307ms to wait for k8s-apps to be running ...
	I0520 11:51:00.007226   58398 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:00.007281   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:00.023780   58398 system_svc.go:56] duration metric: took 16.545966ms WaitForService to wait for kubelet
	I0520 11:51:00.023806   58398 kubeadm.go:576] duration metric: took 4m22.463864802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:00.023830   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:00.026757   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:00.026775   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:00.026788   58398 node_conditions.go:105] duration metric: took 2.951465ms to run NodePressure ...
	I0520 11:51:00.026798   58398 start.go:240] waiting for startup goroutines ...
	I0520 11:51:00.026804   58398 start.go:245] waiting for cluster config update ...
	I0520 11:51:00.026814   58398 start.go:254] writing updated cluster config ...
	I0520 11:51:00.027094   58398 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:00.078104   58398 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:00.079923   58398 out.go:177] * Done! kubectl is now configured to use "embed-certs-274976" cluster and "default" namespace by default
	I0520 11:50:57.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.385874   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:57.554979   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:59.555797   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.886892   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:05.384747   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.060543   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:04.555457   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:06.556619   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:07.386407   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:09.885284   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:09.055313   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.056015   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.886887   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:13.887204   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:15.888591   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:12.055930   58840 pod_ready.go:81] duration metric: took 4m0.00701193s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:12.055967   58840 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:51:12.055978   58840 pod_ready.go:38] duration metric: took 4m5.378750595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:12.055998   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:51:12.056034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:12.056101   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:12.115475   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.115503   58840 cri.go:89] found id: ""
	I0520 11:51:12.115513   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:12.115568   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.120850   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:12.120940   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:12.161022   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.161055   58840 cri.go:89] found id: ""
	I0520 11:51:12.161062   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:12.161130   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.165525   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:12.165586   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:12.211862   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.211890   58840 cri.go:89] found id: ""
	I0520 11:51:12.211899   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:12.211959   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.217037   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:12.217114   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:12.256985   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.257010   58840 cri.go:89] found id: ""
	I0520 11:51:12.257018   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:12.257064   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.261765   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:12.261825   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:12.299745   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.299773   58840 cri.go:89] found id: ""
	I0520 11:51:12.299781   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:12.299842   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.307681   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:12.307752   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:12.346729   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.346754   58840 cri.go:89] found id: ""
	I0520 11:51:12.346763   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:12.346819   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.352034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:12.352096   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:12.394387   58840 cri.go:89] found id: ""
	I0520 11:51:12.394405   58840 logs.go:276] 0 containers: []
	W0520 11:51:12.394412   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:12.394416   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:12.394460   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:12.434198   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:12.434224   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.434229   58840 cri.go:89] found id: ""
	I0520 11:51:12.434238   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:12.434298   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.439424   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.444096   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:12.444113   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:12.501635   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:12.501668   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:12.516529   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:12.516557   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:12.662690   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:12.662723   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.724415   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:12.724455   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.767417   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:12.767446   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.815739   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:12.815775   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.860515   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:12.860543   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.900348   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:12.900377   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.948489   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:12.948518   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.996999   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:12.997030   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:13.034866   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:13.034896   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:13.532828   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:13.532875   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.083150   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:51:16.106110   58840 api_server.go:72] duration metric: took 4m16.64390235s to wait for apiserver process to appear ...
	I0520 11:51:16.106155   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:51:16.106198   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:16.106262   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:16.148418   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:16.148450   58840 cri.go:89] found id: ""
	I0520 11:51:16.148459   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:16.148527   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.153400   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:16.153456   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:16.196145   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:16.196167   58840 cri.go:89] found id: ""
	I0520 11:51:16.196175   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:16.196231   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.200792   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:16.200857   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:16.244411   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:16.244435   58840 cri.go:89] found id: ""
	I0520 11:51:16.244446   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:16.244497   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.249637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:16.249707   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:16.288520   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.288542   58840 cri.go:89] found id: ""
	I0520 11:51:16.288550   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:16.288613   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.293307   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:16.293377   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:16.336963   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.336988   58840 cri.go:89] found id: ""
	I0520 11:51:16.336998   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:16.337053   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.342155   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:16.342236   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:16.390589   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.390610   58840 cri.go:89] found id: ""
	I0520 11:51:16.390617   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:16.390672   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.395319   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:16.395367   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:16.444947   58840 cri.go:89] found id: ""
	I0520 11:51:16.444974   58840 logs.go:276] 0 containers: []
	W0520 11:51:16.444985   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:16.445005   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:16.445069   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:16.485389   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.485414   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.485418   58840 cri.go:89] found id: ""
	I0520 11:51:16.485425   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:16.485481   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.490036   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.494650   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:16.494678   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:16.510820   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:16.510847   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.549583   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:16.549616   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.593009   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:16.593036   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.644523   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:16.644551   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.686326   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:16.686355   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.753308   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:16.753339   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.799568   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:16.799600   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:16.858422   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:16.858454   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:18.385869   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:20.386467   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:16.983556   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:16.983591   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:17.034851   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:17.034885   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:17.078220   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:17.078247   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:17.120946   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:17.120979   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:20.055408   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:51:20.059619   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:51:20.060649   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:51:20.060671   58840 api_server.go:131] duration metric: took 3.954509172s to wait for apiserver health ...
	I0520 11:51:20.060679   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:51:20.060698   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:20.060745   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:20.099666   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.099687   58840 cri.go:89] found id: ""
	I0520 11:51:20.099695   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:20.099746   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.104235   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:20.104286   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:20.162860   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.162881   58840 cri.go:89] found id: ""
	I0520 11:51:20.162887   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:20.162936   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.168215   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:20.168267   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:20.208871   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.208900   58840 cri.go:89] found id: ""
	I0520 11:51:20.208909   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:20.208982   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.213637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:20.213717   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:20.253194   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.253219   58840 cri.go:89] found id: ""
	I0520 11:51:20.253230   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:20.253286   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.258190   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:20.258261   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:20.299693   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.299713   58840 cri.go:89] found id: ""
	I0520 11:51:20.299720   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:20.299779   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.308104   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:20.308165   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:20.348954   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:20.348977   58840 cri.go:89] found id: ""
	I0520 11:51:20.348985   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:20.349030   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.353738   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:20.353798   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:20.388466   58840 cri.go:89] found id: ""
	I0520 11:51:20.388486   58840 logs.go:276] 0 containers: []
	W0520 11:51:20.388496   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:20.388504   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:20.388564   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:20.430628   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.430653   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.430659   58840 cri.go:89] found id: ""
	I0520 11:51:20.430668   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:20.430714   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.435241   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.439312   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:20.439332   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.476361   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:20.476388   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.514317   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:20.514345   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:20.566557   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:20.566604   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:20.582249   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:20.582279   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.639530   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:20.639568   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.676804   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:20.676837   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.713661   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:20.713691   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.757100   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:20.757125   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:20.805824   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:20.805859   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:20.921508   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:20.921540   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.968752   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:20.968785   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:21.024773   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:21.024802   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:23.915504   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:51:23.915536   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.915540   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.915544   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.915549   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.915552   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.915555   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.915562   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.915566   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.915574   58840 system_pods.go:74] duration metric: took 3.854889521s to wait for pod list to return data ...
	I0520 11:51:23.915582   58840 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:23.917651   58840 default_sa.go:45] found service account: "default"
	I0520 11:51:23.917672   58840 default_sa.go:55] duration metric: took 2.084872ms for default service account to be created ...
	I0520 11:51:23.917679   58840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:23.922566   58840 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:23.922589   58840 system_pods.go:89] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.922594   58840 system_pods.go:89] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.922599   58840 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.922604   58840 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.922608   58840 system_pods.go:89] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.922612   58840 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.922618   58840 system_pods.go:89] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.922623   58840 system_pods.go:89] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.922630   58840 system_pods.go:126] duration metric: took 4.946206ms to wait for k8s-apps to be running ...
	I0520 11:51:23.922638   58840 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:23.922679   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:23.941348   58840 system_svc.go:56] duration metric: took 18.700356ms WaitForService to wait for kubelet
	I0520 11:51:23.941388   58840 kubeadm.go:576] duration metric: took 4m24.479184818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:23.941414   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:23.945372   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:23.945400   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:23.945414   58840 node_conditions.go:105] duration metric: took 3.993897ms to run NodePressure ...
	I0520 11:51:23.945428   58840 start.go:240] waiting for startup goroutines ...
	I0520 11:51:23.945437   58840 start.go:245] waiting for cluster config update ...
	I0520 11:51:23.945450   58840 start.go:254] writing updated cluster config ...
	I0520 11:51:23.945739   58840 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:23.997255   58840 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:23.999485   58840 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-668445" cluster and "default" namespace by default
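Rough kubectl/systemctl equivalents of the readiness checks performed in-process above (system pods, default service account, kubelet service, node capacity), assuming the context name minikube reports at the end of the run:

    kubectl --context default-k8s-diff-port-668445 -n kube-system get pods      # system pods Running
    kubectl --context default-k8s-diff-port-668445 get sa default               # default service account exists
    sudo systemctl is-active --quiet kubelet && echo "kubelet active"           # kubelet service running
    kubectl --context default-k8s-diff-port-668445 describe nodes | grep -A5 Capacity   # cpu / ephemeral-storage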
	I0520 11:51:22.386549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:24.885200   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:26.887178   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:29.384900   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:31.386214   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:33.386309   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:35.379767   57519 pod_ready.go:81] duration metric: took 4m0.000946428s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:35.379806   57519 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0520 11:51:35.379826   57519 pod_ready.go:38] duration metric: took 4m4.525892516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:35.379854   57519 kubeadm.go:591] duration metric: took 4m13.966188007s to restartPrimaryControlPlane
	W0520 11:51:35.379905   57519 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:51:35.379929   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:07.074203   57519 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.694253213s)
	I0520 11:52:07.074270   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:07.089871   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:07.100708   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:07.111260   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:07.111283   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:07.111333   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:07.121864   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:07.121912   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:07.131426   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:07.140637   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:07.140693   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:07.150305   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.160398   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:07.160464   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.170170   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:07.179684   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:07.179740   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
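A compact sketch of the stale-config cleanup the preceding entries perform: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init.

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"    # missing or stale -> remove before kubeadm init
      fi
    done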
	I0520 11:52:07.189036   57519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:07.242553   57519 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:07.242606   57519 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:07.396088   57519 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:07.396232   57519 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:07.396391   57519 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:07.612882   57519 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:07.614941   57519 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:07.615033   57519 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:07.615124   57519 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:07.615249   57519 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:07.615353   57519 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:07.615465   57519 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:07.615543   57519 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:07.615641   57519 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:07.615726   57519 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:07.615815   57519 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:07.615915   57519 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:07.615970   57519 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:07.616047   57519 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:07.744190   57519 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:08.012140   57519 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:08.080751   57519 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:08.369179   57519 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:08.507095   57519 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:08.507736   57519 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:08.510271   57519 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
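A minimal sketch of the troubleshooting the kubeadm failure text above suggests for a kubelet that never becomes healthy during the v1.20.0 bring-up, using only the commands named in that output:

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet | tail -n 100
    curl -sSL http://localhost:10248/healthz            # the probe kubeadm keeps retrying
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause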
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:08.512322   57519 out.go:204]   - Booting up control plane ...
	I0520 11:52:08.512437   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:08.512530   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:08.512609   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:08.533828   57519 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:08.534843   57519 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:08.534903   57519 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:08.670084   57519 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:08.670192   57519 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:09.172058   57519 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.003696ms
	I0520 11:52:09.172181   57519 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:14.174127   57519 kubeadm.go:309] [api-check] The API server is healthy after 5.002000599s
	I0520 11:52:14.189265   57519 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:52:14.707457   57519 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:52:14.731273   57519 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:52:14.731480   57519 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:52:14.746881   57519 kubeadm.go:309] [bootstrap-token] Using token: 3eyb61.mp2wh3ubkzpzouqm
	I0520 11:52:14.748381   57519 out.go:204]   - Configuring RBAC rules ...
	I0520 11:52:14.748539   57519 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:52:14.764426   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:52:14.773450   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:52:14.778770   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:52:14.782819   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:52:14.787255   57519 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:52:14.900560   57519 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:52:15.342284   57519 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:52:15.901553   57519 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:52:15.904310   57519 kubeadm.go:309] 
	I0520 11:52:15.904380   57519 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:52:15.904389   57519 kubeadm.go:309] 
	I0520 11:52:15.904507   57519 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:52:15.904545   57519 kubeadm.go:309] 
	I0520 11:52:15.904578   57519 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:52:15.904653   57519 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:52:15.904713   57519 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:52:15.904723   57519 kubeadm.go:309] 
	I0520 11:52:15.904813   57519 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:52:15.904824   57519 kubeadm.go:309] 
	I0520 11:52:15.904910   57519 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:52:15.904935   57519 kubeadm.go:309] 
	I0520 11:52:15.905012   57519 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:52:15.905113   57519 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:52:15.905218   57519 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:52:15.905239   57519 kubeadm.go:309] 
	I0520 11:52:15.905326   57519 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:52:15.905433   57519 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:52:15.905443   57519 kubeadm.go:309] 
	I0520 11:52:15.905534   57519 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.905675   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:52:15.905725   57519 kubeadm.go:309] 	--control-plane 
	I0520 11:52:15.905738   57519 kubeadm.go:309] 
	I0520 11:52:15.905851   57519 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:52:15.905862   57519 kubeadm.go:309] 
	I0520 11:52:15.905989   57519 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.906131   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:52:15.907657   57519 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:15.907690   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:52:15.907703   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:15.909664   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:52:15.911132   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:52:15.927634   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:52:15.946135   57519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:52:15.946282   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:15.946298   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:52:16.161431   57519 ops.go:34] apiserver oom_adj: -16
	I0520 11:52:16.166400   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
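A sketch of the post-init steps shown above, assuming the same paths the log uses: the bridge CNI conflist is written under /etc/cni/net.d, and a cluster-admin binding is created for the kube-system default service account.

    sudo mkdir -p /etc/cni/net.d                         # 1-k8s.conflist is copied here
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default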
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:16.666687   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.167351   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.666800   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.167094   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.667154   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.167261   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.666780   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.167002   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.667078   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.167229   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.666511   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.167275   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.666512   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.166471   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.667393   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.167417   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.667387   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.167046   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.666728   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.167118   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.667450   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.166833   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.666776   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.167244   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.666741   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.758525   57519 kubeadm.go:1107] duration metric: took 12.81228323s to wait for elevateKubeSystemPrivileges
	W0520 11:52:28.758571   57519 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:52:28.758582   57519 kubeadm.go:393] duration metric: took 5m7.403869165s to StartCluster
	I0520 11:52:28.758599   57519 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.758679   57519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:52:28.760488   57519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.760723   57519 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:28.762410   57519 out.go:177] * Verifying Kubernetes components...
	I0520 11:52:28.760783   57519 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:52:28.760966   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:28.763493   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:28.763499   57519 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:52:28.763527   57519 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:52:28.763528   57519 addons.go:69] Setting metrics-server=true in profile "no-preload-814599"
	W0520 11:52:28.763537   57519 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:52:28.763527   57519 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:52:28.763558   57519 addons.go:234] Setting addon metrics-server=true in "no-preload-814599"
	I0520 11:52:28.763562   57519 host.go:66] Checking if "no-preload-814599" exists ...
	W0520 11:52:28.763568   57519 addons.go:243] addon metrics-server should already be in state true
	I0520 11:52:28.763582   57519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:52:28.763589   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.763934   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763952   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763984   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764022   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.764047   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764081   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.778949   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 11:52:28.778995   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0520 11:52:28.779481   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.779522   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.780023   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780053   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780159   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780180   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780498   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780532   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780680   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.780720   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0520 11:52:28.781084   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.781094   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.781127   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.781605   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.781627   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.781951   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.782608   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.782651   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.784981   57519 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	W0520 11:52:28.785010   57519 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:52:28.785041   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.785399   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.785423   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.797463   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0520 11:52:28.797488   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0520 11:52:28.797958   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798050   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798571   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798589   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.798634   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798647   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.799040   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799043   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799206   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.799250   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.802002   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.804188   57519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:52:28.802940   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.805408   57519 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:52:28.804469   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0520 11:52:28.805363   57519 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:28.806586   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:52:28.806592   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:52:28.806601   57519 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:52:28.806607   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806613   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806965   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.807364   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.807375   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.807765   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.808210   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.808226   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.810837   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.810866   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811278   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811300   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811477   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.811716   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.811753   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811769   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811856   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812027   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.812308   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.812494   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.812643   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812795   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.824082   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0520 11:52:28.824580   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.825117   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.825144   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.825492   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.825709   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.827343   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.827545   57519 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:28.827561   57519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:52:28.827574   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.830623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831261   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.831287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.831625   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.831783   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.831912   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.984120   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:52:29.006910   57519 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016171   57519 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:52:29.016197   57519 node_ready.go:38] duration metric: took 9.258398ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016207   57519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:29.024085   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:29.092559   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:29.094377   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:52:29.094395   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:52:29.112741   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:29.133783   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:52:29.133804   57519 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:52:29.202727   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:29.202756   57519 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:52:29.347632   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
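A sketch of the addon application above, collapsed into a single invocation for clarity (the run applies storage-provisioner, storageclass, and the metrics-server manifests in separate commands after copying them to /etc/kubernetes/addons):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.1/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml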
	I0520 11:52:30.143576   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050979242s)
	I0520 11:52:30.143633   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143645   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143652   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030877915s)
	I0520 11:52:30.143694   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143706   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143986   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.143960   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.144014   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144026   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144037   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144040   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144053   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144061   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144068   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144045   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144326   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144345   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.146044   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.146057   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.146068   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.182464   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.182492   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.182802   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.182817   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.557632   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209948791s)
	I0520 11:52:30.557698   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.557715   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.557972   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.557992   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.557996   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558011   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.558022   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.558252   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.558267   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558277   57519 addons.go:470] Verifying addon metrics-server=true in "no-preload-814599"
	I0520 11:52:30.558302   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.560471   57519 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0520 11:52:30.561889   57519 addons.go:505] duration metric: took 1.801104362s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
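Note: before reporting the addons as enabled, the run above also verifies the metrics-server addon ("Verifying addon metrics-server=true"). As a rough, illustrative sketch only (not minikube's actual verification code), such a check could poll the kube-system Deployment with client-go until it reports a ready replica; the in-guest kubeconfig path is taken from the log above, and the Deployment name "metrics-server" is an assumption about the addon manifest.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the run above; adjust to wherever yours lives.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the (assumed) metrics-server Deployment reports a ready replica.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			dep, err := client.AppsV1().Deployments("kube-system").Get(ctx, "metrics-server", metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep waiting
			}
			return dep.Status.ReadyReplicas > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("metrics-server is serving")
}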
	I0520 11:52:31.030297   57519 pod_ready.go:102] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"False"
	I0520 11:52:31.538984   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.539002   57519 pod_ready.go:81] duration metric: took 2.514888376s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.539011   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544069   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.544089   57519 pod_ready.go:81] duration metric: took 5.072004ms for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544097   57519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549299   57519 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.549315   57519 pod_ready.go:81] duration metric: took 5.212738ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549323   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555210   57519 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.555229   57519 pod_ready.go:81] duration metric: took 5.899536ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555263   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560438   57519 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.560454   57519 pod_ready.go:81] duration metric: took 5.183454ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560463   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928261   57519 pod_ready.go:92] pod "kube-proxy-wkfkc" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.928280   57519 pod_ready.go:81] duration metric: took 367.811223ms for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928290   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328564   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:32.328589   57519 pod_ready.go:81] duration metric: took 400.292676ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328598   57519 pod_ready.go:38] duration metric: took 3.31238177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
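Note: the "extra waiting" phase that just finished checks every pod matching the label selectors quoted in the log entry above for the Ready condition. A minimal client-go sketch of that kind of check (illustrative only; the helper name allCriticalPodsReady is hypothetical, the kubeconfig path comes from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// systemCriticalSelectors mirrors the label list quoted in the log entry above.
var systemCriticalSelectors = []string{
	"k8s-app=kube-dns",
	"component=etcd",
	"component=kube-apiserver",
	"component=kube-controller-manager",
	"k8s-app=kube-proxy",
	"component=kube-scheduler",
}

// allCriticalPodsReady (hypothetical helper) reports whether every pod matching
// the selectors has its Ready condition set to True.
func allCriticalPodsReady(ctx context.Context, client kubernetes.Interface) (bool, error) {
	for _, sel := range systemCriticalSelectors {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, pod := range pods.Items {
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := allCriticalPodsReady(context.Background(), client)
	if err != nil {
		panic(err)
	}
	fmt.Println("all system-critical pods Ready:", ok)
}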
	I0520 11:52:32.328612   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:52:32.328672   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:52:32.346546   57519 api_server.go:72] duration metric: took 3.585793393s to wait for apiserver process to appear ...
	I0520 11:52:32.346575   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:52:32.346598   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:52:32.351109   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:52:32.352044   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:52:32.352070   57519 api_server.go:131] duration metric: took 5.485989ms to wait for apiserver health ...
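Note: the healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log (192.168.39.185:8443), expecting a 200 response with the body "ok". A minimal standalone sketch of that probe, skipping certificate verification purely for brevity (a real check would load the cluster CA from the kubeconfig):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only; do not do this in production
		},
	}
	// Endpoint taken from the log above.
	resp, err := client.Get("https://192.168.39.185:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// A healthy apiserver answers 200 with the body "ok", as seen in the log.
	fmt.Printf("%d %s\n", resp.StatusCode, string(body))
}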
	I0520 11:52:32.352080   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:52:32.532017   57519 system_pods.go:59] 9 kube-system pods found
	I0520 11:52:32.532048   57519 system_pods.go:61] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.532053   57519 system_pods.go:61] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.532056   57519 system_pods.go:61] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.532060   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.532064   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.532067   57519 system_pods.go:61] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.532071   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.532077   57519 system_pods.go:61] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.532081   57519 system_pods.go:61] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.532094   57519 system_pods.go:74] duration metric: took 180.008393ms to wait for pod list to return data ...
	I0520 11:52:32.532103   57519 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:52:32.729233   57519 default_sa.go:45] found service account: "default"
	I0520 11:52:32.729264   57519 default_sa.go:55] duration metric: took 197.154237ms for default service account to be created ...
	I0520 11:52:32.729278   57519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:52:32.932686   57519 system_pods.go:86] 9 kube-system pods found
	I0520 11:52:32.932717   57519 system_pods.go:89] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.932764   57519 system_pods.go:89] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.932777   57519 system_pods.go:89] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.932786   57519 system_pods.go:89] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.932796   57519 system_pods.go:89] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.932805   57519 system_pods.go:89] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.932812   57519 system_pods.go:89] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.932824   57519 system_pods.go:89] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.932835   57519 system_pods.go:89] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.932854   57519 system_pods.go:126] duration metric: took 203.569018ms to wait for k8s-apps to be running ...
	I0520 11:52:32.932867   57519 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:52:32.932945   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:32.948983   57519 system_svc.go:56] duration metric: took 16.10841ms WaitForService to wait for kubelet
	I0520 11:52:32.949014   57519 kubeadm.go:576] duration metric: took 4.188266242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:32.949035   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:52:33.128478   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:52:33.128503   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:52:33.128514   57519 node_conditions.go:105] duration metric: took 179.473758ms to run NodePressure ...
	I0520 11:52:33.128527   57519 start.go:240] waiting for startup goroutines ...
	I0520 11:52:33.128536   57519 start.go:245] waiting for cluster config update ...
	I0520 11:52:33.128548   57519 start.go:254] writing updated cluster config ...
	I0520 11:52:33.128860   57519 ssh_runner.go:195] Run: rm -f paused
	I0520 11:52:33.177360   57519 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:52:33.179409   57519 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
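Note: the final line of this run reports the kubectl/cluster version pair and their minor skew ("kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)"). As a small illustrative sketch of how that skew figure can be derived from the two version strings (not minikube's own implementation, which compares semver versions; minorOf is a hypothetical helper):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf (hypothetical helper) extracts the minor component from a version
// string such as "v1.30.1" or "1.30.1".
func minorOf(v string) (int, error) {
	v = strings.TrimPrefix(v, "v")
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	kubectlVersion, clusterVersion := "1.30.1", "1.30.1" // values reported in the log above

	km, err := minorOf(kubectlVersion)
	if err != nil {
		panic(err)
	}
	cm, err := minorOf(clusterVersion)
	if err != nil {
		panic(err)
	}

	skew := km - cm
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
}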
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 
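Note: every [kubelet-check] failure quoted above is the same probe: an HTTP GET against the kubelet's healthz endpoint on localhost:10248 being refused because the kubelet never came up (the log's own suggestion is to inspect 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd). A minimal standalone sketch of that probe, with the connection-refused path handled the way the log recommends:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// kubeadm's [kubelet-check] polls this endpoint; the failures quoted above
	// are exactly this request being refused.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// A connection error here usually means the kubelet is not running or is
		// crash-looping; the log suggests the next diagnostic steps.
		fmt.Println("kubelet healthz unreachable:", err)
		fmt.Println("try: systemctl status kubelet && journalctl -xeu kubelet")
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}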
	
	
	==> CRI-O <==
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.378260262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c4aaa92-5667-4194-b61d-0700fb243164 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.378588135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,PodSandboxId:3bc1c76e2967b9fa8de44519b1eb9cbfc78eb7d1ccc5bd98c4c16de3273f4782,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205950639599060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,PodSandboxId:186a07c2b9dcefe382bbd27f96a750adcf526c549de07e8445fd7c753f204575,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950189807669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,PodSandboxId:c1584e5ace282fb23966c7e3abd4d2284ebe8fa48b6a0e4fb41ce76fdadb66e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950088941153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,PodSandboxId:454543b0cb4cfce5dd4d8d64d5e7412adb9adb38ce89f77793c9ef74eddfcf04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1716205949561791299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,PodSandboxId:a935507c5eb9bba1ea795c3b839e245fb248b37c27d2d057004a28cd3faac6e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205929646435708,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,PodSandboxId:33d15f48f264c9fbf0c963787b156cec2212ef022f8bec3843226374633c9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:171620592967630
4710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,PodSandboxId:7425c02584ccb23e557f28ec2dd86e595082d99cec925fcd8f3fe8fbb927bd33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205929615674493,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,PodSandboxId:5cceeab3a53cb296769746e8be114b3b7bba322bc8768756752cc5f9a4e3fcc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205929599029501,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c4aaa92-5667-4194-b61d-0700fb243164 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.379669693Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,Verbose:false,}" file="otel-collector/interceptors.go:62" id=af18efd9-4f23-4d83-9a04-8bc65c4e7192 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.379845787Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716205950702283363,StartedAt:1716205950727670186,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b/containers/storage-provisioner/7494e2ef,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b/volumes/kubernetes.io~projected/kube-api-access-d9kbc,Readonly:true,SelinuxRelabel:fals
e,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=af18efd9-4f23-4d83-9a04-8bc65c4e7192 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.380419635Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c61b3cd8-999a-459e-82dd-ca74096b4dd4 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.380778541Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716205950300240142,StartedAt:1716205950327655871,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/98bbe29a-5bd1-4568-ac29-5bf030c38f79/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/98bbe29a-5bd1-4568-ac29-5bf030c38f79/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/98bbe29a-5bd1-4568-ac29-5bf030c38f79/containers/coredns/dd1dc692,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/98bbe29a-5bd1-4568-ac29-5bf030c38f79/volumes/kubernetes.io~projected/kube-api-access-b5hrt,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-8qdbk_98bbe29a-5bd1-4568-ac29-5bf030c38f79/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c61b3cd8-999a-459e-82dd-ca74096b4dd4 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.381362776Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,Verbose:false,}" file="otel-collector/interceptors.go:62" id=65ea99c5-35b1-4a97-8495-575661840756 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.381468304Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716205950169451980,StartedAt:1716205950243443633,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d36efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"co
ntainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/d36efcc0-4f91-4c5d-8781-0bb5f22187ab/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d36efcc0-4f91-4c5d-8781-0bb5f22187ab/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d36efcc0-4f91-4c5d-8781-0bb5f22187ab/containers/coredns/26b3d1e4,Readonly:false,SelinuxRelabel:false,Propagation
:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d36efcc0-4f91-4c5d-8781-0bb5f22187ab/volumes/kubernetes.io~projected/kube-api-access-qgz9b,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-kbz7d_d36efcc0-4f91-4c5d-8781-0bb5f22187ab/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=65ea99c5-35b1-4a97-8495-575661840756 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.382187632Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,Verbose:false,}" file="otel-collector/interceptors.go:62" id=676c86cf-b6e2-434e-985a-650c749ce9c2 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.382302066Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1716205949729796534,StartedAt:1716205949790322815,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/679b7143-7bff-47f3-ab69-cd87eec68013/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/679b7143-7bff-47f3-ab69-cd87eec68013/containers/kube-proxy/a29fe45a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/
lib/kubelet/pods/679b7143-7bff-47f3-ab69-cd87eec68013/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/679b7143-7bff-47f3-ab69-cd87eec68013/volumes/kubernetes.io~projected/kube-api-access-grjhb,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-wkfkc_679b7143-7bff-47f3-ab69-cd87eec68013/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-c
ollector/interceptors.go:74" id=676c86cf-b6e2-434e-985a-650c749ce9c2 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.382668016Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1b2dda7e-c99f-40ee-92dd-b18a93d0c0df name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.383024813Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1716205929798111451,StartedAt:1716205929884900483,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/dc6ce6754bf51d7f1357b80d45b5714b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/dc6ce6754bf51d7f1357b80d45b5714b/containers/kube-controller-manager/69509598,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE
,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-814599_dc6ce6754bf51d7f1357b80d45b5714b/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetM
ems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1b2dda7e-c99f-40ee-92dd-b18a93d0c0df name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.383316970Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,Verbose:false,}" file="otel-collector/interceptors.go:62" id=330963fe-51a8-435b-8fe6-2ecd31f674ab name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.383428261Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1716205929765211998,StartedAt:1716205929861661998,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/58e26e37c75cdd5dbbc768c26af58ef5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/58e26e37c75cdd5dbbc768c26af58ef5/containers/kube-scheduler/91549915,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-814599_58e26e37c75cdd5dbbc768c26af58ef5/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPe
riod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=330963fe-51a8-435b-8fe6-2ecd31f674ab name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.383842945Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=23dc4838-6977-4422-b17c-21c49b258a16 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.383937103Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1716205929759717159,StartedAt:1716205929898612385,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ee1a24551000cc8a34f7c390fab6ba2c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ee1a24551000cc8a34f7c390fab6ba2c/containers/kube-apiserver/c76b0bc0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Contain
erPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-no-preload-814599_ee1a24551000cc8a34f7c390fab6ba2c/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=23dc4838-6977-4422-b17c-21c49b258a16 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.384506829Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,Verbose:false,}" file="otel-collector/interceptors.go:62" id=e1bd707f-b029-4102-a1c3-5b9b56c8cf34 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.384626988Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1716205929663978473,StartedAt:1716205929764441877,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/419d30f9580a9703df9b861bc34ec254/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/419d30f9580a9703df9b861bc34ec254/containers/etcd/e19ee4a1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-
no-preload-814599_419d30f9580a9703df9b861bc34ec254/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e1bd707f-b029-4102-a1c3-5b9b56c8cf34 name=/runtime.v1.RuntimeService/ContainerStatus
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.399975545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68d6f076-9162-4047-8af6-0aa570ba2c93 name=/runtime.v1.RuntimeService/Version
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.400063285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68d6f076-9162-4047-8af6-0aa570ba2c93 name=/runtime.v1.RuntimeService/Version
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.401463740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cfa3cda-51a0-429c-9031-b97941c020a0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.401916420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206495401891594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cfa3cda-51a0-429c-9031-b97941c020a0 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.403043552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e41a23be-94b7-45e6-a93f-e189d4fda520 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.403119939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e41a23be-94b7-45e6-a93f-e189d4fda520 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:01:35 no-preload-814599 crio[725]: time="2024-05-20 12:01:35.403285442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,PodSandboxId:3bc1c76e2967b9fa8de44519b1eb9cbfc78eb7d1ccc5bd98c4c16de3273f4782,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205950639599060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,PodSandboxId:186a07c2b9dcefe382bbd27f96a750adcf526c549de07e8445fd7c753f204575,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950189807669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,PodSandboxId:c1584e5ace282fb23966c7e3abd4d2284ebe8fa48b6a0e4fb41ce76fdadb66e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950088941153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,PodSandboxId:454543b0cb4cfce5dd4d8d64d5e7412adb9adb38ce89f77793c9ef74eddfcf04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1716205949561791299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,PodSandboxId:a935507c5eb9bba1ea795c3b839e245fb248b37c27d2d057004a28cd3faac6e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205929646435708,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,PodSandboxId:33d15f48f264c9fbf0c963787b156cec2212ef022f8bec3843226374633c9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:171620592967630
4710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,PodSandboxId:7425c02584ccb23e557f28ec2dd86e595082d99cec925fcd8f3fe8fbb927bd33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205929615674493,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,PodSandboxId:5cceeab3a53cb296769746e8be114b3b7bba322bc8768756752cc5f9a4e3fcc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205929599029501,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e41a23be-94b7-45e6-a93f-e189d4fda520 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b10ea977d057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3bc1c76e2967b       storage-provisioner
	ac9ee696baddd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   186a07c2b9dce       coredns-7db6d8ff4d-8qdbk
	621a853961ce5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c1584e5ace282       coredns-7db6d8ff4d-kbz7d
	852810a24d4b8       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   454543b0cb4cf       kube-proxy-wkfkc
	10ba469b3ff42       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   33d15f48f264c       kube-scheduler-no-preload-814599
	c62ae52ced8bb       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   a935507c5eb9b       kube-controller-manager-no-preload-814599
	970dc977aa55a       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   7425c02584ccb       kube-apiserver-no-preload-814599
	40ee432c68f3b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   5cceeab3a53cb       etcd-no-preload-814599
	
	
	==> coredns [621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-814599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-814599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=no-preload-814599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:52:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-814599
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:01:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:57:42 +0000   Mon, 20 May 2024 11:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:57:42 +0000   Mon, 20 May 2024 11:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:57:42 +0000   Mon, 20 May 2024 11:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:57:42 +0000   Mon, 20 May 2024 11:52:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    no-preload-814599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2f29426d32543759e35a87fdce091d7
	  System UUID:                f2f29426-d325-4375-9e35-a87fdce091d7
	  Boot ID:                    5e6ed7bc-04cd-434c-a77d-3c174a8c5d2f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8qdbk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-kbz7d                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-814599                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-814599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-814599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-wkfkc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-814599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-lggvs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m27s)  kubelet          Node no-preload-814599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m27s)  kubelet          Node no-preload-814599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m27s)  kubelet          Node no-preload-814599 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-814599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-814599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-814599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-814599 event: Registered Node no-preload-814599 in Controller
	
	
	==> dmesg <==
	[  +0.041288] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.795774] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543045] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643245] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 11:47] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.056955] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073880] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.205010] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.121108] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.295123] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[ +16.411629] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.069163] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.905885] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +3.470549] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.795070] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.105964] kauditd_printk_skb: 35 callbacks suppressed
	[May20 11:52] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.519819] systemd-fstab-generator[4006]: Ignoring "noauto" option for root device
	[  +4.447525] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.928669] systemd-fstab-generator[4327]: Ignoring "noauto" option for root device
	[ +13.894593] systemd-fstab-generator[4538]: Ignoring "noauto" option for root device
	[  +0.114607] kauditd_printk_skb: 14 callbacks suppressed
	[May20 11:53] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1] <==
	{"level":"info","ts":"2024-05-20T11:52:09.958855Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T11:52:09.959402Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T11:52:09.961811Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T11:52:09.962476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2024-05-20T11:52:09.962708Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-05-20T11:52:09.963049Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-05-20T11:52:09.963088Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-05-20T11:52:10.283822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T11:52:10.283882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T11:52:10.283915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 1"}
	{"level":"info","ts":"2024-05-20T11:52:10.283928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.283935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.283942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.283949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.288871Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.293033Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:no-preload-814599 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:52:10.29382Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:52:10.304976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-05-20T11:52:10.305386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.305478Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.305522Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.305534Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:52:10.318434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:52:10.324828Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:52:10.324904Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:01:35 up 14 min,  0 users,  load average: 0.12, 0.11, 0.09
	Linux no-preload-814599 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6] <==
	I0520 11:55:31.091075       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:57:12.417469       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:57:12.418165       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0520 11:57:13.418271       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:57:13.418382       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0520 11:57:13.418338       1 handler_proxy.go:93] no RequestInfo found in the context
	I0520 11:57:13.418415       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0520 11:57:13.418618       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:57:13.419435       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:58:13.418818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:58:13.418933       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 11:58:13.418948       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 11:58:13.420170       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 11:58:13.420237       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:58:13.420243       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:00:13.419488       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:00:13.419975       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:00:13.420035       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:00:13.420861       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:00:13.421001       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:00:13.421037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292] <==
	I0520 11:55:58.380059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:56:27.933877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:56:28.388929       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:56:57.939869       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:56:58.396416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:57:27.950130       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:57:28.406681       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:57:57.956085       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:57:58.415869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 11:58:27.237927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="451.792µs"
	E0520 11:58:27.961802       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:58:28.423185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 11:58:39.232637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="72.071µs"
	E0520 11:58:57.966263       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:58:58.430907       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:59:27.972126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:59:28.441365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 11:59:57.979587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 11:59:58.450722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:00:27.985142       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:00:28.464276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:00:57.990275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:00:58.471607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:01:27.996414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:01:28.481149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb] <==
	I0520 11:52:30.257188       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:52:30.453297       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0520 11:52:30.621879       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:52:30.621919       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:52:30.621934       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:52:30.625363       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:52:30.625646       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:52:30.626075       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:52:30.628083       1 config.go:192] "Starting service config controller"
	I0520 11:52:30.628166       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:52:30.628902       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:52:30.628993       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:52:30.630312       1 config.go:319] "Starting node config controller"
	I0520 11:52:30.630348       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:52:30.729054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:52:30.729096       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:52:30.730592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88] <==
	W0520 11:52:12.455933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:12.455966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:12.456090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:52:12.456123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:52:12.456247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:12.456325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:13.346932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:52:13.347001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:52:13.355335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:52:13.355383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:52:13.365679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:52:13.367285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:52:13.366579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:52:13.367397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:52:13.376069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:13.376173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:13.382356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:52:13.382418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:52:13.385383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:13.385586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:13.387291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:52:13.387431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 11:52:13.840570       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:52:13.840700       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 11:52:15.747232       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 11:59:15 no-preload-814599 kubelet[4334]: E0520 11:59:15.234133    4334 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 11:59:15 no-preload-814599 kubelet[4334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 11:59:15 no-preload-814599 kubelet[4334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 11:59:15 no-preload-814599 kubelet[4334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 11:59:15 no-preload-814599 kubelet[4334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 11:59:23 no-preload-814599 kubelet[4334]: E0520 11:59:23.214817    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 11:59:35 no-preload-814599 kubelet[4334]: E0520 11:59:35.214226    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 11:59:48 no-preload-814599 kubelet[4334]: E0520 11:59:48.213635    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:00:02 no-preload-814599 kubelet[4334]: E0520 12:00:02.214247    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:00:15 no-preload-814599 kubelet[4334]: E0520 12:00:15.233371    4334 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:00:15 no-preload-814599 kubelet[4334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:00:15 no-preload-814599 kubelet[4334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:00:15 no-preload-814599 kubelet[4334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:00:15 no-preload-814599 kubelet[4334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:00:17 no-preload-814599 kubelet[4334]: E0520 12:00:17.213360    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:00:28 no-preload-814599 kubelet[4334]: E0520 12:00:28.213369    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:00:43 no-preload-814599 kubelet[4334]: E0520 12:00:43.215013    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:00:57 no-preload-814599 kubelet[4334]: E0520 12:00:57.213538    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:01:10 no-preload-814599 kubelet[4334]: E0520 12:01:10.214002    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:01:15 no-preload-814599 kubelet[4334]: E0520 12:01:15.233206    4334 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:01:15 no-preload-814599 kubelet[4334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:01:15 no-preload-814599 kubelet[4334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:01:15 no-preload-814599 kubelet[4334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:01:15 no-preload-814599 kubelet[4334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:01:25 no-preload-814599 kubelet[4334]: E0520 12:01:25.215631    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	
	
	==> storage-provisioner [0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748] <==
	I0520 11:52:30.795684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:52:30.820684       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:52:30.821062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:52:30.835193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:52:30.835579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-814599_bc915668-1408-43bb-9ba0-6078a3bf8d32!
	I0520 11:52:30.836666       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca28e41c-3685-4dce-abaa-1fcd2de91890", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-814599_bc915668-1408-43bb-9ba0-6078a3bf8d32 became leader
	I0520 11:52:30.936173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-814599_bc915668-1408-43bb-9ba0-6078a3bf8d32!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-814599 -n no-preload-814599
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-814599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-lggvs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-814599 describe pod metrics-server-569cc877fc-lggvs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-814599 describe pod metrics-server-569cc877fc-lggvs: exit status 1 (62.588776ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-lggvs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-814599 describe pod metrics-server-569cc877fc-lggvs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
[the warning above was emitted 42 more times, byte-identically, while the apiserver at 192.168.83.100:8443 remained unreachable]
E0520 11:54:54.309880   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
[the warning above was emitted a further 82 times, byte-identically, as the harness kept polling for the kubernetes-dashboard pods]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
E0520 11:57:19.192025   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
[previous warning repeated 37 more times while the API server at 192.168.83.100:8443 remained unreachable]
E0520 11:57:57.360011   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: [identical warning repeated 77 more times while the API server at 192.168.83.100:8443 kept refusing connections]
E0520 11:59:54.309941   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
helpers_test.go:329: [identical warning repeated 102 more times while the API server at 192.168.83.100:8443 kept refusing connections]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
E0520 12:02:19.191253   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (227.574775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-210420" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (220.370483ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-210420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-210420 logs -n 25: (1.603814389s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                           | kubernetes-upgrade-854547    | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:44:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:44:01.940145   58840 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:44:01.940398   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940409   58840 out.go:304] Setting ErrFile to fd 2...
	I0520 11:44:01.940415   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940633   58840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false
	I0520 11:44:01.942056   58840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5185,"bootTime":1716200257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:44:01.942118   58840 start.go:139] virtualization: kvm guest
	I0520 11:44:01.944331   58840 out.go:177] * [default-k8s-diff-port-668445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:44:01.945948   58840 notify.go:220] Checking for updates...
	I0520 11:44:01.945956   58840 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:44:01.947257   58840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:44:01.948525   58840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:44:01.949881   58840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:44:01.951156   58840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:44:01.952736   58840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:44:01.954565   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:44:01.954932   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.954979   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.969623   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0520 11:44:01.969984   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.970444   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.970489   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.970816   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.971013   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:01.971212   58840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:44:01.971493   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.971540   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.985556   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0520 11:44:01.985885   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.986365   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.986388   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.986707   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.986883   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:02.021778   58840 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:44:02.022944   58840 start.go:297] selected driver: kvm2
	I0520 11:44:02.022954   58840 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.023042   58840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:44:02.023717   58840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.023796   58840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:44:02.038516   58840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:44:02.038900   58840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:44:02.038931   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:44:02.038942   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:44:02.038989   58840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.039097   58840 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.041645   58840 out.go:177] * Starting "default-k8s-diff-port-668445" primary control-plane node in "default-k8s-diff-port-668445" cluster
	I0520 11:44:02.042785   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:44:02.042826   58840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:44:02.042839   58840 cache.go:56] Caching tarball of preloaded images
	I0520 11:44:02.042917   58840 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:44:02.042929   58840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:44:02.043041   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:44:02.043267   58840 start.go:360] acquireMachinesLock for default-k8s-diff-port-668445: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:44:06.605177   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:09.677144   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:15.757188   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:18.829221   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:24.909165   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:27.981180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:34.061180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:37.133203   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:43.213225   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:46.285226   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:52.365216   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:55.437121   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:01.517184   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:04.589206   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:10.669220   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:13.741264   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:19.821209   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:22.893127   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:28.973155   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:32.045217   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:38.125195   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.127376   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:41.127411   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127706   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:45:41.127739   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127954   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:45:41.129638   57519 machine.go:97] duration metric: took 4m44.663213682s to provisionDockerMachine
	I0520 11:45:41.129681   57519 fix.go:56] duration metric: took 4m44.685919857s for fixHost
	I0520 11:45:41.129692   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 4m44.68594997s
	W0520 11:45:41.129717   57519 start.go:713] error starting host: provision: host is not running
	W0520 11:45:41.129807   57519 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0520 11:45:41.129817   57519 start.go:728] Will try again in 5 seconds ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:46.134344   57519 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:46:00.745820   58398 start.go:364] duration metric: took 3m22.67445444s to acquireMachinesLock for "embed-certs-274976"
	I0520 11:46:00.745877   58398 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:00.745887   58398 fix.go:54] fixHost starting: 
	I0520 11:46:00.746287   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:00.746320   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:00.762678   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0520 11:46:00.763132   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:00.763654   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:00.763681   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:00.763991   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:00.764154   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:00.764308   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:00.765826   58398 fix.go:112] recreateIfNeeded on embed-certs-274976: state=Stopped err=<nil>
	I0520 11:46:00.765861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	W0520 11:46:00.766017   58398 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:00.768362   58398 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274976" ...
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:00.769810   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Start
	I0520 11:46:00.769988   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring networks are active...
	I0520 11:46:00.770615   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network default is active
	I0520 11:46:00.770928   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network mk-embed-certs-274976 is active
	I0520 11:46:00.771273   58398 main.go:141] libmachine: (embed-certs-274976) Getting domain xml...
	I0520 11:46:00.771977   58398 main.go:141] libmachine: (embed-certs-274976) Creating domain...
	I0520 11:46:02.023816   58398 main.go:141] libmachine: (embed-certs-274976) Waiting to get IP...
	I0520 11:46:02.024616   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.025031   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.025111   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.025009   59335 retry.go:31] will retry after 218.274206ms: waiting for machine to come up
	I0520 11:46:02.244661   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.245135   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.245163   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.245110   59335 retry.go:31] will retry after 359.160276ms: waiting for machine to come up
	I0520 11:46:02.606589   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.607106   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.607129   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.607044   59335 retry.go:31] will retry after 462.479953ms: waiting for machine to come up
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:03.071686   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.072242   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.072271   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.072198   59335 retry.go:31] will retry after 531.663033ms: waiting for machine to come up
	I0520 11:46:03.606121   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.606629   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.606653   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.606581   59335 retry.go:31] will retry after 690.866851ms: waiting for machine to come up
	I0520 11:46:04.299326   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:04.299799   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:04.299829   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:04.299750   59335 retry.go:31] will retry after 928.456543ms: waiting for machine to come up
	I0520 11:46:05.230115   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.230568   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.230586   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.230539   59335 retry.go:31] will retry after 761.597702ms: waiting for machine to come up
	I0520 11:46:05.993511   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.993897   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.993930   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.993854   59335 retry.go:31] will retry after 1.405329972s: waiting for machine to come up
	I0520 11:46:07.401706   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:07.402246   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:07.402312   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:07.402223   59335 retry.go:31] will retry after 1.717177668s: waiting for machine to come up
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:09.121909   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:09.122323   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:09.122358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:09.122261   59335 retry.go:31] will retry after 1.49678652s: waiting for machine to come up
	I0520 11:46:10.620875   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:10.621226   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:10.621255   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:10.621184   59335 retry.go:31] will retry after 2.772139112s: waiting for machine to come up
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.396309   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:13.396705   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:13.396725   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:13.396678   59335 retry.go:31] will retry after 2.987909039s: waiting for machine to come up
	I0520 11:46:16.386007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:16.386500   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:16.386527   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:16.386460   59335 retry.go:31] will retry after 4.351760111s: waiting for machine to come up
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.138009   58840 start.go:364] duration metric: took 2m20.094710429s to acquireMachinesLock for "default-k8s-diff-port-668445"
	I0520 11:46:22.138085   58840 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:22.138096   58840 fix.go:54] fixHost starting: 
	I0520 11:46:22.138535   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:22.138577   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:22.154345   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0520 11:46:22.154701   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:22.155172   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:22.155198   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:22.155492   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:22.155667   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:22.155834   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:22.157486   58840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-668445: state=Stopped err=<nil>
	I0520 11:46:22.157525   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	W0520 11:46:22.157703   58840 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:22.159724   58840 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-668445" ...
	I0520 11:46:20.742954   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743480   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has current primary IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743503   58398 main.go:141] libmachine: (embed-certs-274976) Found IP for machine: 192.168.50.167
	I0520 11:46:20.743517   58398 main.go:141] libmachine: (embed-certs-274976) Reserving static IP address...
	I0520 11:46:20.743907   58398 main.go:141] libmachine: (embed-certs-274976) Reserved static IP address: 192.168.50.167
	I0520 11:46:20.743930   58398 main.go:141] libmachine: (embed-certs-274976) Waiting for SSH to be available...
	I0520 11:46:20.743955   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.744000   58398 main.go:141] libmachine: (embed-certs-274976) DBG | skip adding static IP to network mk-embed-certs-274976 - found existing host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"}
	I0520 11:46:20.744017   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Getting to WaitForSSH function...
	I0520 11:46:20.746685   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747071   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.747112   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747244   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH client type: external
	I0520 11:46:20.747259   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa (-rw-------)
	I0520 11:46:20.747278   58398 main.go:141] libmachine: (embed-certs-274976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:20.747287   58398 main.go:141] libmachine: (embed-certs-274976) DBG | About to run SSH command:
	I0520 11:46:20.747295   58398 main.go:141] libmachine: (embed-certs-274976) DBG | exit 0
	I0520 11:46:20.872997   58398 main.go:141] libmachine: (embed-certs-274976) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:20.873377   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetConfigRaw
	I0520 11:46:20.874039   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:20.876476   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.876827   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.876859   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.877169   58398 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:46:20.877407   58398 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:20.877432   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:20.877643   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.879744   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880089   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.880113   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880328   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.880529   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880673   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880840   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.881034   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.881263   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.881275   58398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:20.993535   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:20.993572   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.993813   58398 buildroot.go:166] provisioning hostname "embed-certs-274976"
	I0520 11:46:20.993841   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.994034   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.996567   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.996922   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.996970   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.997092   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.997280   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997437   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997580   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.997734   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.997993   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.998013   58398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274976 && echo "embed-certs-274976" | sudo tee /etc/hostname
	I0520 11:46:21.125482   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274976
	
	I0520 11:46:21.125512   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.128347   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128717   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.128746   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128916   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.129120   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129296   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129436   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.129613   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.129774   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.129789   58398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274976/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:21.250287   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:21.250320   58398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:21.250371   58398 buildroot.go:174] setting up certificates
	I0520 11:46:21.250385   58398 provision.go:84] configureAuth start
	I0520 11:46:21.250403   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:21.250726   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:21.253610   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254038   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.254053   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254303   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.256633   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.257046   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257206   58398 provision.go:143] copyHostCerts
	I0520 11:46:21.257266   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:21.257289   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:21.257369   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:21.257489   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:21.257499   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:21.257537   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:21.257617   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:21.257626   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:21.257661   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:21.257729   58398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274976 san=[127.0.0.1 192.168.50.167 embed-certs-274976 localhost minikube]
	I0520 11:46:21.458096   58398 provision.go:177] copyRemoteCerts
	I0520 11:46:21.458152   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:21.458178   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.460662   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461006   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.461042   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461205   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.461370   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.461545   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.461705   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.547703   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:21.574345   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:46:21.598843   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:21.624064   58398 provision.go:87] duration metric: took 373.664517ms to configureAuth
	I0520 11:46:21.624091   58398 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:21.624262   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:21.624330   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.626996   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627378   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.627406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627556   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.627711   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.627902   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.628077   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.628231   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.628402   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.628420   58398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:21.894896   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:21.894923   58398 machine.go:97] duration metric: took 1.017499149s to provisionDockerMachine
	I0520 11:46:21.894936   58398 start.go:293] postStartSetup for "embed-certs-274976" (driver="kvm2")
	I0520 11:46:21.894948   58398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:21.894966   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:21.895279   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:21.895304   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.897866   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898160   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.898193   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898292   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.898484   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.898663   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.898779   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.984571   58398 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:21.988891   58398 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:21.988918   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:21.988995   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:21.989129   58398 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:21.989257   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:21.999219   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:22.023032   58398 start.go:296] duration metric: took 128.082095ms for postStartSetup
	I0520 11:46:22.023076   58398 fix.go:56] duration metric: took 21.277188857s for fixHost
	I0520 11:46:22.023094   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.025971   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026362   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.026393   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026536   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.026725   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.026867   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.027017   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.027246   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:22.027424   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:22.027437   58398 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:22.137813   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205582.113779638
	
	I0520 11:46:22.137840   58398 fix.go:216] guest clock: 1716205582.113779638
	I0520 11:46:22.137850   58398 fix.go:229] Guest: 2024-05-20 11:46:22.113779638 +0000 UTC Remote: 2024-05-20 11:46:22.023079043 +0000 UTC m=+224.083510648 (delta=90.700595ms)
	I0520 11:46:22.137898   58398 fix.go:200] guest clock delta is within tolerance: 90.700595ms
	I0520 11:46:22.137906   58398 start.go:83] releasing machines lock for "embed-certs-274976", held for 21.39205527s
	I0520 11:46:22.137943   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.138222   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:22.140951   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.141388   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141558   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142024   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142187   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142245   58398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:22.142291   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.142395   58398 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:22.142417   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.144896   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145116   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145386   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145419   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145490   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145518   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145525   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145647   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145703   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145827   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145849   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146019   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146022   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:22.146175   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	W0520 11:46:22.226345   58398 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:22.226421   58398 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:22.248678   58398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:22.404298   58398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:22.410174   58398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:22.410249   58398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:22.426286   58398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:22.426311   58398 start.go:494] detecting cgroup driver to use...
	I0520 11:46:22.426369   58398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:22.441746   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:22.455445   58398 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:22.455515   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:22.469441   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:22.485963   58398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:22.604443   58398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:22.758542   58398 docker.go:233] disabling docker service ...
	I0520 11:46:22.758628   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:22.773371   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:22.786871   58398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:22.940022   58398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:23.076857   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:23.091980   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:23.111380   58398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:23.111434   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.122532   58398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:23.122597   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.134355   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.145484   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.156776   58398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:23.169017   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.180681   58398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.205664   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.217267   58398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:23.228490   58398 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:23.228555   58398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:23.243578   58398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:23.255394   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:23.387407   58398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:23.537543   58398 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:23.537613   58398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:23.542886   58398 start.go:562] Will wait 60s for crictl version
	I0520 11:46:23.542934   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:46:23.546740   58398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:23.587662   58398 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:23.587746   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.615735   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.648727   58398 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
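The 58398 process above prepares CRI-O for the embed-certs-274976 profile by rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.9, cgroupfs as cgroup manager, conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0) and then restarting the service. A minimal shell sketch of that sequence, using only the paths and values shown in the log (illustrative, not part of the captured output):

    # Point CRI-O at the pause image and cgroupfs driver, then restart and verify.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version   # the log above reports RuntimeName cri-o, RuntimeVersion 1.29.1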
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.161646   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Start
	I0520 11:46:22.161813   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring networks are active...
	I0520 11:46:22.162609   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network default is active
	I0520 11:46:22.163020   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network mk-default-k8s-diff-port-668445 is active
	I0520 11:46:22.163509   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Getting domain xml...
	I0520 11:46:22.164146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Creating domain...
	I0520 11:46:23.456104   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting to get IP...
	I0520 11:46:23.457157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457703   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457735   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.457652   59511 retry.go:31] will retry after 262.047261ms: waiting for machine to come up
	I0520 11:46:23.721329   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721756   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721815   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.721742   59511 retry.go:31] will retry after 301.868596ms: waiting for machine to come up
	I0520 11:46:24.025477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026042   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026072   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.026006   59511 retry.go:31] will retry after 431.470276ms: waiting for machine to come up
	I0520 11:46:24.458962   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459564   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.459450   59511 retry.go:31] will retry after 566.093902ms: waiting for machine to come up
	I0520 11:46:25.027215   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027787   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.027729   59511 retry.go:31] will retry after 471.2178ms: waiting for machine to come up
	I0520 11:46:25.500575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501067   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501106   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.501010   59511 retry.go:31] will retry after 682.542082ms: waiting for machine to come up
	I0520 11:46:26.184886   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185367   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185400   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:26.185315   59511 retry.go:31] will retry after 973.33325ms: waiting for machine to come up
	I0520 11:46:23.650320   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:23.653452   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.653919   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:23.653962   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.654259   58398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:23.658592   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:23.671310   58398 kubeadm.go:877] updating cluster {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:23.671460   58398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:23.671523   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:23.715873   58398 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:23.715947   58398 ssh_runner.go:195] Run: which lz4
	I0520 11:46:23.720362   58398 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:23.724821   58398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:23.724852   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:25.254824   58398 crio.go:462] duration metric: took 1.534501958s to copy over tarball
	I0520 11:46:25.254937   58398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:27.527739   58398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272764169s)
	I0520 11:46:27.527773   58398 crio.go:469] duration metric: took 2.272914971s to extract the tarball
	I0520 11:46:27.527780   58398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:27.564738   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:27.613196   58398 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:27.613220   58398 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:27.613229   58398 kubeadm.go:928] updating node { 192.168.50.167 8443 v1.30.1 crio true true} ...
	I0520 11:46:27.613321   58398 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:27.613381   58398 ssh_runner.go:195] Run: crio config
	I0520 11:46:27.669907   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:27.669931   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:27.669948   58398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:27.669969   58398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274976 NodeName:embed-certs-274976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:27.670117   58398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274976"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:27.670180   58398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:27.680414   58398 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:27.680473   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:27.690431   58398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0520 11:46:27.707625   58398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:27.724120   58398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
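The kubeadm, kubelet, and kube-proxy configuration generated above is written to /var/tmp/minikube/kubeadm.yaml.new before kubeadm consumes it. As a hedged illustration (not part of the captured output), a file in that shape can be sanity-checked with kubeadm's own config tooling; kubeadm config validate exists in recent kubeadm releases, and the binary path below is the one the log installs:

    # Illustrative only: validate the generated config with the matching kubeadm binary.
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new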
	I0520 11:46:27.744218   58398 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:27.748173   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:27.760752   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:27.891061   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:27.909326   58398 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976 for IP: 192.168.50.167
	I0520 11:46:27.909354   58398 certs.go:194] generating shared ca certs ...
	I0520 11:46:27.909387   58398 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:27.909574   58398 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:27.909637   58398 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:27.909650   58398 certs.go:256] generating profile certs ...
	I0520 11:46:27.909754   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/client.key
	I0520 11:46:27.909843   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key.d6be4a1f
	I0520 11:46:27.909922   58398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key
	I0520 11:46:27.910072   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:27.910127   58398 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:27.910146   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:27.910183   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:27.910208   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:27.910229   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:27.910278   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:27.911030   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:27.949250   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.160027   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160492   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160515   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:27.160435   59511 retry.go:31] will retry after 1.263563226s: waiting for machine to come up
	I0520 11:46:28.426016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426503   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426555   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:28.426463   59511 retry.go:31] will retry after 1.407211588s: waiting for machine to come up
	I0520 11:46:29.835680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836101   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836121   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:29.836058   59511 retry.go:31] will retry after 2.320334078s: waiting for machine to come up
	I0520 11:46:27.987645   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:28.036652   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:28.083998   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:46:28.116104   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:28.154695   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:28.181013   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:28.213382   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:28.243550   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:28.268537   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:28.292856   58398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:28.310801   58398 ssh_runner.go:195] Run: openssl version
	I0520 11:46:28.316724   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:28.327461   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332286   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332345   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.339005   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:28.351192   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:28.362639   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367566   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367623   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.373275   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:28.384549   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:28.395600   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400136   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400188   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.405914   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
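	The preceding blocks stage each CA certificate under /usr/share/ca-certificates and then create the <subject-hash>.0 symlink that OpenSSL's trust lookup expects, deriving the hash with "openssl x509 -hash -noout". A minimal Go sketch of that install step (installCACert is a hypothetical helper, not minikube's own code; paths are taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert links a staged CA certificate into /etc/ssl/certs under its
	// OpenSSL subject-hash name (<hash>.0), mirroring the hash + ln -fs steps
	// shown in the log above. Illustrative only.
	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}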
	I0520 11:46:28.417533   58398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:28.422327   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:28.428746   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:28.434840   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:28.441227   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:28.447310   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:28.453450   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
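	Each of the "openssl x509 ... -checkend 86400" runs above confirms that the corresponding control-plane certificate will still be valid 24 hours from now. An equivalent standalone check in Go (certExpiresWithin is a hypothetical helper; the certificate path is taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresWithin reports whether the PEM certificate at path expires
	// within the given window, the same question "openssl x509 -checkend 86400"
	// answers for a 24h window. Sketch only, not minikube code.
	func certExpiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}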
	I0520 11:46:28.462637   58398 kubeadm.go:391] StartCluster: {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:28.462746   58398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:28.462828   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.501759   58398 cri.go:89] found id: ""
	I0520 11:46:28.501826   58398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:28.512536   58398 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:28.512554   58398 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:28.512559   58398 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:28.512603   58398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:28.522288   58398 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:28.523266   58398 kubeconfig.go:125] found "embed-certs-274976" server: "https://192.168.50.167:8443"
	I0520 11:46:28.525188   58398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:28.535056   58398 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.167
	I0520 11:46:28.535092   58398 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:28.535104   58398 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:28.535157   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.573317   58398 cri.go:89] found id: ""
	I0520 11:46:28.573429   58398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:28.593111   58398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:28.603312   58398 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:28.603345   58398 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:28.603398   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:28.612794   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:28.612870   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:28.622819   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:28.634550   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:28.634614   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:28.644568   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.654918   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:28.654980   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.665581   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:28.677090   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:28.677167   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:28.687152   58398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:28.697505   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:28.833183   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:29.963435   58398 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130211655s)
	I0520 11:46:29.963486   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.185882   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.248946   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
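	Rather than a full "kubeadm init", the restart path above drives individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A simplified Go sketch of sequencing those phases over exec (illustrative only; minikube actually runs them over SSH inside the VM):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runKubeadmPhases replays the init phases shown in the log, one subcommand
	// at a time, against the generated kubeadm.yaml. Sketch, not minikube code.
	func runKubeadmPhases() error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(append([]string{}, phase...), "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := runKubeadmPhases(); err != nil {
			fmt.Println(err)
		}
	}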
	I0520 11:46:30.334924   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:30.335020   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.835637   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.336028   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.354314   58398 api_server.go:72] duration metric: took 1.01937754s to wait for apiserver process to appear ...
	I0520 11:46:31.354343   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:31.354365   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:31.354916   58398 api_server.go:269] stopped: https://192.168.50.167:8443/healthz: Get "https://192.168.50.167:8443/healthz": dial tcp 192.168.50.167:8443: connect: connection refused
	I0520 11:46:31.854629   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.299609   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.299653   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.299669   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.370279   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.370304   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.370317   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.378426   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.378448   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.855034   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.860164   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:34.860205   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.354583   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.362461   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:35.362489   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.854836   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.859322   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:46:35.867050   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:35.867077   58398 api_server.go:131] duration metric: took 4.512726716s to wait for apiserver health ...
	I0520 11:46:35.867088   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:35.867096   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:35.869427   58398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
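	The healthz probes above tolerate the transient 403 (anonymous access rejected before RBAC bootstrap completes) and 500 (post-start hooks still failing) responses, retrying roughly every 500ms until the endpoint returns 200 "ok". A condensed Go sketch of such a polling loop, with the endpoint and timings assumed from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline passes, treating 403/500 as "not ready yet".
	// TLS verification is skipped only to keep the sketch short.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // ~500ms cadence, as in the log
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.167:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}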
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.158195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158712   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:32.158637   59511 retry.go:31] will retry after 2.733920459s: waiting for machine to come up
	I0520 11:46:34.893892   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894436   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:34.894374   59511 retry.go:31] will retry after 3.119057685s: waiting for machine to come up
	I0520 11:46:35.871058   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:35.898211   58398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:35.942295   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:35.954120   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:35.954156   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:35.954171   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:35.954181   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:35.954194   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:35.954204   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 11:46:35.954213   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:35.954220   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:35.954229   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:46:35.954239   58398 system_pods.go:74] duration metric: took 11.924168ms to wait for pod list to return data ...
	I0520 11:46:35.954249   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:35.957627   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:35.957664   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:35.957679   58398 node_conditions.go:105] duration metric: took 3.421729ms to run NodePressure ...
	I0520 11:46:35.957700   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:36.235314   58398 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241075   58398 kubeadm.go:733] kubelet initialised
	I0520 11:46:36.241101   58398 kubeadm.go:734] duration metric: took 5.759765ms waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241110   58398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:36.248030   58398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.254230   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254263   58398 pod_ready.go:81] duration metric: took 6.20401ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.254274   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254281   58398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.259348   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259379   58398 pod_ready.go:81] duration metric: took 5.088735ms for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.259392   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259402   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.266963   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.266997   58398 pod_ready.go:81] duration metric: took 7.580054ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.267010   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.267020   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.346321   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346353   58398 pod_ready.go:81] duration metric: took 79.314801ms for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.346363   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346371   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.746645   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746674   58398 pod_ready.go:81] duration metric: took 400.291751ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.746686   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746692   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.146413   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146443   58398 pod_ready.go:81] duration metric: took 399.744378ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.146454   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146465   58398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.545943   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545966   58398 pod_ready.go:81] duration metric: took 399.491619ms for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.545974   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545981   58398 pod_ready.go:38] duration metric: took 1.304861628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:37.545997   58398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:37.558076   58398 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:37.558097   58398 kubeadm.go:591] duration metric: took 9.045531679s to restartPrimaryControlPlane
	I0520 11:46:37.558106   58398 kubeadm.go:393] duration metric: took 9.095477048s to StartCluster
	I0520 11:46:37.558125   58398 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.558199   58398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:37.559701   58398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.559920   58398 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:37.561751   58398 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:37.560038   58398 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:37.560143   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:37.563096   58398 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274976"
	I0520 11:46:37.563106   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:37.563124   58398 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274976"
	W0520 11:46:37.563133   58398 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:37.563156   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563164   58398 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274976"
	I0520 11:46:37.563179   58398 addons.go:69] Setting metrics-server=true in profile "embed-certs-274976"
	I0520 11:46:37.563193   58398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274976"
	I0520 11:46:37.563204   58398 addons.go:234] Setting addon metrics-server=true in "embed-certs-274976"
	W0520 11:46:37.563215   58398 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:37.563241   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563516   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563558   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563575   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563600   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563614   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563638   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.577924   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0520 11:46:37.578096   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0520 11:46:37.578120   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0520 11:46:37.578322   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578480   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578527   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578851   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.578879   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.578982   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579003   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579019   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579038   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579202   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579342   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579364   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579385   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.579824   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579867   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.579871   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579895   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.582214   58398 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274976"
	W0520 11:46:37.582230   58398 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:37.582249   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.582511   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.582542   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.594308   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0520 11:46:37.594672   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.595047   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.595067   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.595525   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.595551   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0520 11:46:37.595709   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.596037   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.596545   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.596568   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.596980   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.597579   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.597599   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.597802   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.599963   58398 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:37.598457   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0520 11:46:37.601485   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:37.601505   58398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:37.601527   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.601793   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.602219   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.602238   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.602502   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.602688   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.604122   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.604188   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.605753   58398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:37.604566   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.604874   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.607190   58398 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.607205   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:37.607221   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.607240   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.607263   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.607414   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.607729   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.610088   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610462   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.610483   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610738   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.610912   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.611072   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.611212   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.613655   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0520 11:46:37.613938   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.614386   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.614403   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.614653   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.614826   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.616151   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.616331   58398 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.616345   58398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:37.616362   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.619627   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620050   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.620072   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620214   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.620365   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.620480   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.620660   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.734680   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:37.751098   58398 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:37.842846   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:37.842867   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:37.853894   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.862145   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:37.862167   58398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:37.883656   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.884046   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:37.884061   58398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:37.941086   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:38.187392   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187415   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187819   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.187841   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.187855   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187822   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.188095   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.188115   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.193695   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.193710   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.193926   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.193942   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813454   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813472   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.813794   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.813813   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813821   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813843   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.813894   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.814140   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.814154   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897046   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897078   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897453   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897469   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897494   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897505   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897755   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897759   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897772   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897784   58398 addons.go:470] Verifying addon metrics-server=true in "embed-certs-274976"
	I0520 11:46:38.899910   58398 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.014993   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015405   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015458   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:38.015373   59511 retry.go:31] will retry after 4.505016985s: waiting for machine to come up
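	The libmachine lines above poll for the VM's DHCP lease with delays that grow and carry jitter (1.26s, 1.41s, 2.32s, 3.12s, 4.51s). A generic retry-with-backoff sketch in Go, loosely modeled on that pattern (retryWithBackoff is a hypothetical helper, not the retry.go referenced in the log):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a growing, jittered delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		err := retryWithBackoff(5, time.Second, func() error {
			return errors.New("machine IP not assigned yet")
		})
		fmt.Println(err)
	}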
	I0520 11:46:38.901272   58398 addons.go:505] duration metric: took 1.341257876s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0520 11:46:39.754549   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:41.758717   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:43.865868   57519 start.go:364] duration metric: took 57.7314587s to acquireMachinesLock for "no-preload-814599"
	I0520 11:46:43.865920   57519 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:43.865931   57519 fix.go:54] fixHost starting: 
	I0520 11:46:43.866311   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:43.866347   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:43.885634   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0520 11:46:43.886068   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:43.886620   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:46:43.886659   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:43.887013   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:43.887200   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:46:43.887337   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:46:43.888902   57519 fix.go:112] recreateIfNeeded on no-preload-814599: state=Stopped err=<nil>
	I0520 11:46:43.888958   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	W0520 11:46:43.889108   57519 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:43.891052   57519 out.go:177] * Restarting existing kvm2 VM for "no-preload-814599" ...
	I0520 11:46:42.521630   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522114   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Found IP for machine: 192.168.61.221
	I0520 11:46:42.522140   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserving static IP address...
	I0520 11:46:42.522157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has current primary IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522534   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserved static IP address: 192.168.61.221
	I0520 11:46:42.522556   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for SSH to be available...
	I0520 11:46:42.522571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.522593   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | skip adding static IP to network mk-default-k8s-diff-port-668445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"}
	I0520 11:46:42.522613   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Getting to WaitForSSH function...
	I0520 11:46:42.524812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525199   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.525221   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525362   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH client type: external
	I0520 11:46:42.525391   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa (-rw-------)
	I0520 11:46:42.525427   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:42.525443   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | About to run SSH command:
	I0520 11:46:42.525455   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | exit 0
	I0520 11:46:42.648689   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | SSH cmd err, output: <nil>: 
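For reference, the external SSH client options logged for WaitForSSH amount to an invocation along these lines (reconstructed from the flags shown above, not a command captured by the log):

	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa \
	    -p 22 docker@192.168.61.221 'exit 0'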
	I0520 11:46:42.649122   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetConfigRaw
	I0520 11:46:42.649745   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:42.652220   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.652596   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652821   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:46:42.653026   58840 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:42.653046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:42.653235   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.655453   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655758   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.655777   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655923   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.656089   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656384   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.656551   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.656729   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.656739   58840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:42.761195   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:42.761225   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761450   58840 buildroot.go:166] provisioning hostname "default-k8s-diff-port-668445"
	I0520 11:46:42.761477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761666   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.764340   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764670   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.764695   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764801   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.764973   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765131   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765285   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.765472   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.765634   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.765647   58840 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-668445 && echo "default-k8s-diff-port-668445" | sudo tee /etc/hostname
	I0520 11:46:42.883179   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-668445
	
	I0520 11:46:42.883204   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.885913   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886302   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.886347   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.886699   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.886884   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.887040   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.887238   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.887419   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.887444   58840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-668445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-668445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-668445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:43.001769   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
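The guarded sed/tee script above pins 127.0.1.1 to the new hostname only when no matching /etc/hosts entry exists yet. A quick check of the result on the guest (illustrative, not part of this run) would be:

	grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 default-k8s-diff-port-668445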
	I0520 11:46:43.001792   58840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:43.001825   58840 buildroot.go:174] setting up certificates
	I0520 11:46:43.001834   58840 provision.go:84] configureAuth start
	I0520 11:46:43.001842   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:43.002064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.004606   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.004897   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.004944   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.005087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.006929   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.007296   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007386   58840 provision.go:143] copyHostCerts
	I0520 11:46:43.007461   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:43.007470   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:43.007523   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:43.007608   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:43.007617   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:43.007637   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:43.007686   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:43.007693   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:43.007709   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:43.007753   58840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-668445 san=[127.0.0.1 192.168.61.221 default-k8s-diff-port-668445 localhost minikube]
	I0520 11:46:43.165780   58840 provision.go:177] copyRemoteCerts
	I0520 11:46:43.165835   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:43.165860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.168782   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169136   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.169181   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169343   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.169575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.169717   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.169892   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.252175   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:43.280313   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0520 11:46:43.306901   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:43.331191   58840 provision.go:87] duration metric: took 329.344445ms to configureAuth
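The provision.go line above lists the SANs requested for server.pem (127.0.0.1, 192.168.61.221, default-k8s-diff-port-668445, localhost, minikube). One way to confirm them on the guest after the scp (an illustrative check, not part of this run) is:

	ssh -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa docker@192.168.61.221 \
	    "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"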
	I0520 11:46:43.331227   58840 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:43.331478   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:43.331576   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.334173   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334573   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.334603   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334770   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.334961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335108   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335236   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.335381   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.335593   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.335615   58840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:43.630180   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:43.630213   58840 machine.go:97] duration metric: took 977.172725ms to provisionDockerMachine
	I0520 11:46:43.630229   58840 start.go:293] postStartSetup for "default-k8s-diff-port-668445" (driver="kvm2")
	I0520 11:46:43.630244   58840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:43.630270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.630597   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:43.630635   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.633307   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633650   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.633680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633798   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.633976   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.634141   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.634264   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.719350   58840 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:43.723320   58840 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:43.723339   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:43.723396   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:43.723478   58840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:43.723564   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:43.732422   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:43.757138   58840 start.go:296] duration metric: took 126.894206ms for postStartSetup
	I0520 11:46:43.757179   58840 fix.go:56] duration metric: took 21.619082215s for fixHost
	I0520 11:46:43.757202   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.759486   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.759860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.759887   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.760065   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.760248   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760425   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760527   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.760654   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.760824   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.760839   58840 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:43.865691   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205603.841593335
	
	I0520 11:46:43.865713   58840 fix.go:216] guest clock: 1716205603.841593335
	I0520 11:46:43.865722   58840 fix.go:229] Guest: 2024-05-20 11:46:43.841593335 +0000 UTC Remote: 2024-05-20 11:46:43.757183475 +0000 UTC m=+161.848941663 (delta=84.40986ms)
	I0520 11:46:43.865777   58840 fix.go:200] guest clock delta is within tolerance: 84.40986ms
	I0520 11:46:43.865790   58840 start.go:83] releasing machines lock for "default-k8s-diff-port-668445", held for 21.727726024s
	I0520 11:46:43.865824   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.866087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.868681   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869054   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.869076   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869256   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869792   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.870035   58840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:43.870091   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.870174   58840 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:43.870195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.872817   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873059   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873496   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873537   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873684   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.873687   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873861   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.873866   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.874033   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.874046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.874234   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	W0520 11:46:43.973840   58840 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:43.973913   58840 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:43.980706   58840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:44.132625   58840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:44.140558   58840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:44.140629   58840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:44.158922   58840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:44.158946   58840 start.go:494] detecting cgroup driver to use...
	I0520 11:46:44.159006   58840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:44.179206   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:44.194183   58840 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:44.194254   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:44.208701   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:44.223086   58840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:44.342811   58840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:44.491358   58840 docker.go:233] disabling docker service ...
	I0520 11:46:44.491428   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:44.509165   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:44.526088   58840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:44.700452   58840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:44.833129   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:44.849090   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:44.870148   58840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:44.870222   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.880951   58840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:44.881016   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.895038   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.908245   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.918844   58840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:44.929838   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.940446   58840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.960350   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
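Taken together, these sed edits leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (reconstructed from the commands above with TOML sections omitted; the resulting file is not captured in this log):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]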
	I0520 11:46:44.974587   58840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:44.985689   58840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:44.985748   58840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:44.999929   58840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:45.010732   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:45.140038   58840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:45.291656   58840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:45.291728   58840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:45.297351   58840 start.go:562] Will wait 60s for crictl version
	I0520 11:46:45.297412   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:46:45.301297   58840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:45.343635   58840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:45.343703   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.373914   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.404596   58840 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:43.892523   57519 main.go:141] libmachine: (no-preload-814599) Calling .Start
	I0520 11:46:43.892686   57519 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:46:43.893370   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:46:43.893729   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:46:43.894105   57519 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:46:43.894847   57519 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:46:45.229992   57519 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:46:45.230981   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.231451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.231518   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.231438   59753 retry.go:31] will retry after 187.539159ms: waiting for machine to come up
	I0520 11:46:45.420854   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.421369   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.421393   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.421328   59753 retry.go:31] will retry after 327.709711ms: waiting for machine to come up
	I0520 11:46:45.750799   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.751384   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.751414   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.751334   59753 retry.go:31] will retry after 445.096849ms: waiting for machine to come up
	I0520 11:46:46.197899   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.198567   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.198598   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.198536   59753 retry.go:31] will retry after 561.029629ms: waiting for machine to come up
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.405909   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:45.408865   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409399   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:45.409429   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409655   58840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:45.413815   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:45.426264   58840 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:45.426366   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:45.426405   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:45.468938   58840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:45.469021   58840 ssh_runner.go:195] Run: which lz4
	I0520 11:46:45.474957   58840 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:45.481006   58840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:45.481039   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:44.255728   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:44.756897   58398 node_ready.go:49] node "embed-certs-274976" has status "Ready":"True"
	I0520 11:46:44.756963   58398 node_ready.go:38] duration metric: took 7.005828991s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:44.756978   58398 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:44.767584   58398 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774049   58398 pod_ready.go:92] pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:44.774078   58398 pod_ready.go:81] duration metric: took 6.468023ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774099   58398 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795658   58398 pod_ready.go:92] pod "etcd-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.795691   58398 pod_ready.go:81] duration metric: took 2.021582833s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795705   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810711   58398 pod_ready.go:92] pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.810739   58398 pod_ready.go:81] duration metric: took 15.025215ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810751   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
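These node_ready/pod_ready checks run through the Kubernetes API from the test binary; expressed with kubectl, the same conditions look like the following (illustrative commands, not run by the test):

	kubectl --context embed-certs-274976 wait --for=condition=Ready node/embed-certs-274976 --timeout=6m0s
	kubectl --context embed-certs-274976 -n kube-system wait --for=condition=Ready pod/etcd-embed-certs-274976 --timeout=6m0s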
	I0520 11:46:46.760855   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.761534   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.761579   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.761484   59753 retry.go:31] will retry after 497.075791ms: waiting for machine to come up
	I0520 11:46:47.260059   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:47.260542   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:47.260580   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:47.260479   59753 retry.go:31] will retry after 910.980097ms: waiting for machine to come up
	I0520 11:46:48.173677   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:48.174161   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:48.174189   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:48.174108   59753 retry.go:31] will retry after 931.969789ms: waiting for machine to come up
	I0520 11:46:49.107672   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:49.108241   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:49.108271   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:49.108194   59753 retry.go:31] will retry after 1.075042352s: waiting for machine to come up
	I0520 11:46:50.184518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:50.185078   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:50.185108   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:50.185025   59753 retry.go:31] will retry after 1.174646184s: waiting for machine to come up
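Here libmachine is polling the libvirt DHCP leases for the restarted no-preload-814599 domain with a growing retry interval. Done by hand against the same libvirt instance, the wait is roughly (an illustrative sketch, assuming virsh access on the host):

	until sudo virsh domifaddr no-preload-814599 --source lease | grep -q ipv4; do
	    sleep 1
	done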
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.010637   58840 crio.go:462] duration metric: took 1.535717978s to copy over tarball
	I0520 11:46:47.010719   58840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:49.308712   58840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297962166s)
	I0520 11:46:49.308747   58840 crio.go:469] duration metric: took 2.298080001s to extract the tarball
	I0520 11:46:49.308756   58840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:49.346851   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:49.393090   58840 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:49.393118   58840 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:49.393126   58840 kubeadm.go:928] updating node { 192.168.61.221 8444 v1.30.1 crio true true} ...
	I0520 11:46:49.393247   58840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-668445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:49.393326   58840 ssh_runner.go:195] Run: crio config
	I0520 11:46:49.438553   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:49.438575   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:49.438592   58840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:49.438618   58840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.221 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-668445 NodeName:default-k8s-diff-port-668445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:49.438792   58840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-668445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:49.438876   58840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:49.449415   58840 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:49.449487   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:49.460951   58840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0520 11:46:49.479654   58840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:49.497494   58840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0520 11:46:49.518474   58840 ssh_runner.go:195] Run: grep 192.168.61.221	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:49.522598   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
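The one-liner above updates /etc/hosts idempotently: it strips any existing control-plane.minikube.internal entry and appends the current control-plane IP. A minimal Go sketch of the same update, assuming direct write access to the file (the log performs it over SSH with sudo); the IP and hostname are the ones shown in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale "<addr>\t<hostname>" line and appends a
// fresh "<ip>\t<hostname>" entry, mirroring the bash one-liner in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale control-plane alias
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.221", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}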
	I0520 11:46:49.537237   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:49.686022   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:49.712850   58840 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445 for IP: 192.168.61.221
	I0520 11:46:49.712871   58840 certs.go:194] generating shared ca certs ...
	I0520 11:46:49.712885   58840 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:49.713065   58840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:49.713110   58840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:49.713123   58840 certs.go:256] generating profile certs ...
	I0520 11:46:49.713261   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.key
	I0520 11:46:49.713346   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key.50005da2
	I0520 11:46:49.713390   58840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key
	I0520 11:46:49.713548   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:49.713583   58840 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:49.713597   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:49.713629   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:49.713656   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:49.713686   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:49.713744   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:49.714370   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:49.751367   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:49.787162   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:49.822660   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:49.863401   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 11:46:49.895295   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:49.928089   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:49.952687   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:49.982562   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:50.007234   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:50.031099   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:50.055146   58840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:50.073066   58840 ssh_runner.go:195] Run: openssl version
	I0520 11:46:50.080951   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:50.092134   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097436   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097496   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.103714   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:50.114321   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:50.125104   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131225   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131276   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.139113   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:50.153959   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:50.165441   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170302   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170349   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.176388   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:50.188372   58840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:50.193168   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:50.199298   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:50.208100   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:50.214704   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:50.220565   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:50.226463   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
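Each openssl x509 -checkend 86400 run above asks whether the named certificate expires within the next 24 hours (86400 seconds). A minimal Go equivalent using crypto/x509, assuming the certificate is PEM-encoded on disk; the path in main is just one of the certificates checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend <seconds>" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}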
	I0520 11:46:50.232400   58840 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:50.232472   58840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:50.232532   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.271977   58840 cri.go:89] found id: ""
	I0520 11:46:50.272070   58840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:50.283676   58840 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:50.283699   58840 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:50.283706   58840 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:50.283753   58840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:50.296808   58840 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:50.298406   58840 kubeconfig.go:125] found "default-k8s-diff-port-668445" server: "https://192.168.61.221:8444"
	I0520 11:46:50.300563   58840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:50.310940   58840 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.221
	I0520 11:46:50.310964   58840 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:50.310975   58840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:50.311010   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.361321   58840 cri.go:89] found id: ""
	I0520 11:46:50.361400   58840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:50.382699   58840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:50.397635   58840 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:50.397657   58840 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:50.397707   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0520 11:46:50.407303   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:50.407364   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:50.418331   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0520 11:46:50.427889   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:50.427938   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:50.437649   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.446810   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:50.446861   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.456428   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0520 11:46:50.465444   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:50.465512   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:50.478585   58840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:50.489053   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:50.607045   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.346500   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.594241   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.661179   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
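The restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A minimal Go sketch driving the same phase sequence locally with os/exec, assuming kubeadm sits at the versioned binaries path used in the log; error handling is simplified.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			return
		}
	}
}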
	I0520 11:46:51.751253   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:51.751331   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.318166   58398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.318193   58398 pod_ready.go:81] duration metric: took 1.507434439s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.318207   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323814   58398 pod_ready.go:92] pod "kube-proxy-6n9d5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.323850   58398 pod_ready.go:81] duration metric: took 5.632938ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323865   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356208   58398 pod_ready.go:92] pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.356235   58398 pod_ready.go:81] duration metric: took 32.360735ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356247   58398 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:50.363910   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:52.364530   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:51.360905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:51.482451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:51.482493   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:51.361293   59753 retry.go:31] will retry after 1.455979778s: waiting for machine to come up
	I0520 11:46:52.818580   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:52.819134   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:52.819162   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:52.819073   59753 retry.go:31] will retry after 2.771639261s: waiting for machine to come up
	I0520 11:46:55.593581   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:55.594138   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:55.594167   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:55.594089   59753 retry.go:31] will retry after 2.881341208s: waiting for machine to come up
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.252101   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.752498   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.794795   58840 api_server.go:72] duration metric: took 1.043540264s to wait for apiserver process to appear ...
	I0520 11:46:52.794823   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:52.794847   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:52.795569   58840 api_server.go:269] stopped: https://192.168.61.221:8444/healthz: Get "https://192.168.61.221:8444/healthz": dial tcp 192.168.61.221:8444: connect: connection refused
	I0520 11:46:53.294972   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.015833   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.015872   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.015889   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.053674   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.053705   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.294988   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.300440   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.300474   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:56.795720   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.813192   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.813221   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.295045   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.298962   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:57.298994   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.795590   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.800795   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:46:57.809060   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:57.809084   58840 api_server.go:131] duration metric: took 5.014253865s to wait for apiserver health ...
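The healthz wait above tolerates early 403s (anonymous requests are forbidden until the RBAC bootstrap roles exist) and 500s (post-start hooks such as rbac/bootstrap-roles still failing), and succeeds only once /healthz returns 200 with body "ok". A minimal Go sketch of that polling loop against the same endpoint, assuming TLS verification is skipped purely for brevity (a real client would trust the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes. Connection errors, 403s and 500s are
// all treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Insecure TLS is only for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.221:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}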
	I0520 11:46:57.809092   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:57.809098   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:57.811020   58840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:54.864263   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.363797   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.812202   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:57.824260   58840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:57.843189   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:57.851978   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:57.852012   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:57.852029   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:57.852035   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:57.852042   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:57.852046   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:46:57.852055   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:57.852063   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:57.852067   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:46:57.852073   58840 system_pods.go:74] duration metric: took 8.872046ms to wait for pod list to return data ...
	I0520 11:46:57.852084   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:57.855020   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:57.855043   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:57.855052   58840 node_conditions.go:105] duration metric: took 2.95984ms to run NodePressure ...
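The NodePressure step reads capacity straight off the Node object (ephemeral storage and CPU in this run). A minimal client-go sketch of the same read, assuming the kubeconfig path written earlier in this log; the calls are standard client-go, but the program is illustrative rather than minikube's own code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18925-3819/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}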
	I0520 11:46:57.855065   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:58.124410   58840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129201   58840 kubeadm.go:733] kubelet initialised
	I0520 11:46:58.129222   58840 kubeadm.go:734] duration metric: took 4.789292ms waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129229   58840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:58.134883   58840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.139522   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139557   58840 pod_ready.go:81] duration metric: took 4.638772ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.139569   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139583   58840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.143636   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143657   58840 pod_ready.go:81] duration metric: took 4.061011ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.143669   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143677   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.147479   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147499   58840 pod_ready.go:81] duration metric: took 3.813494ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.147510   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147517   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.247677   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247710   58840 pod_ready.go:81] duration metric: took 100.183226ms for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.247723   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247730   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.648289   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648339   58840 pod_ready.go:81] duration metric: took 400.599602ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.648352   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648365   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.047071   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047099   58840 pod_ready.go:81] duration metric: took 398.721478ms for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.047108   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047114   58840 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.447840   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447866   58840 pod_ready.go:81] duration metric: took 400.738125ms for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.447876   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447885   58840 pod_ready.go:38] duration metric: took 1.318647527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
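Each pod_ready check above inspects the pod's Ready condition and, because the node itself still reports Ready=False, records the pod as skipped rather than blocking on it. A minimal client-go sketch of reading one pod's Ready condition, assuming the same kubeconfig; the pod name is the coredns pod from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18925-3819/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-x92p9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, isPodReady(pod))
}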
	I0520 11:46:59.447899   58840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:59.459985   58840 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:59.460011   58840 kubeadm.go:591] duration metric: took 9.176298972s to restartPrimaryControlPlane
	I0520 11:46:59.460023   58840 kubeadm.go:393] duration metric: took 9.227629245s to StartCluster
	I0520 11:46:59.460038   58840 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.460120   58840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:59.461897   58840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.462172   58840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:59.464109   58840 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:59.462263   58840 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:59.462375   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:59.465582   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:59.464212   58840 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465645   58840 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465657   58840 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:59.465693   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464218   58840 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465752   58840 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465768   58840 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:59.465794   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464219   58840 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465877   58840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-668445"
	I0520 11:46:59.466087   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466116   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466210   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466216   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466232   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466235   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.481963   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:46:59.482030   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0520 11:46:59.482551   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.482768   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.483115   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483140   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483495   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483516   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483759   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.483930   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.483979   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.484484   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.484509   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.485272   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 11:46:59.485657   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.486686   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.486716   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.487080   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.487552   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.487588   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.487989   58840 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.488007   58840 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:59.488037   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.488349   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.488384   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.500796   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0520 11:46:59.501376   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.501793   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0520 11:46:59.502058   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.502098   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.502308   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.502439   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.502661   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.502737   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 11:46:59.502976   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503009   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.503199   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.503329   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.503548   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.503648   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503659   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.504278   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.504632   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.504814   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.506755   58840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:59.504875   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.505278   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.508201   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:59.508219   58840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:59.508238   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.509465   58840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:59.510750   58840 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.510765   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:59.510778   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.511023   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511444   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.511480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511707   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.511939   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.512117   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.512271   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.513612   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.513980   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.514009   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.514207   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.514354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.514460   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.514564   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.523003   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0520 11:46:59.523361   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.523835   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.523858   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.524115   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.524266   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.525489   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.525661   58840 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.525673   58840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:59.525685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.527762   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.528087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.528309   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.528414   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.528559   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.652795   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:59.672785   58840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:46:59.759683   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.760767   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.775512   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:59.775536   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:59.807608   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:59.807633   58840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:59.835597   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:59.835624   58840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:59.862717   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:47:00.145002   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145032   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145352   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145352   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145372   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.145380   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145389   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145645   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145700   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145717   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.150992   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.151008   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.151358   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.151376   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.151385   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.802987   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803030   58840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042235492s)
	I0520 11:47:00.803063   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803081   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803386   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803404   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803422   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803424   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803434   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803439   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803449   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803464   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803621   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803631   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803642   58840 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-668445"
	I0520 11:47:00.803677   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803729   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803743   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.805628   58840 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0520 11:46:58.478246   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:58.478733   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:58.478765   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:58.478670   59753 retry.go:31] will retry after 3.220647355s: waiting for machine to come up
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.806818   58840 addons.go:505] duration metric: took 1.344560138s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0520 11:47:01.675861   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.863455   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.863859   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.702053   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702503   57519 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:47:01.702519   57519 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:47:01.702532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702964   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.702994   57519 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:47:01.703017   57519 main.go:141] libmachine: (no-preload-814599) DBG | skip adding static IP to network mk-no-preload-814599 - found existing host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"}
	I0520 11:47:01.703036   57519 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:47:01.703048   57519 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:47:01.705097   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705376   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.705409   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705482   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:47:01.705511   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:47:01.705543   57519 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:47:01.705558   57519 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:47:01.705593   57519 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:47:01.833182   57519 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:47:01.833551   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:47:01.834149   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:01.836532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837008   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.837047   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837310   57519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:47:01.837509   57519 machine.go:94] provisionDockerMachine start ...
	I0520 11:47:01.837527   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:01.837717   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.839814   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840151   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.840171   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840335   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.840485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840631   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.840958   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.841138   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.841149   57519 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:47:01.953190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:47:01.953221   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953480   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:47:01.953504   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953667   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.956155   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956487   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.956518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956684   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.956884   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957071   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957239   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.957416   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.957580   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.957593   57519 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:47:02.079993   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:47:02.080018   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.082623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.082950   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.082984   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.083154   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.083353   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083520   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083677   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.083852   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.084014   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.084029   57519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:47:02.204190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:47:02.204225   57519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:47:02.204252   57519 buildroot.go:174] setting up certificates
	I0520 11:47:02.204264   57519 provision.go:84] configureAuth start
	I0520 11:47:02.204280   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:02.204562   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:02.207132   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207477   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.207501   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207633   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.209905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210212   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.210231   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210414   57519 provision.go:143] copyHostCerts
	I0520 11:47:02.210472   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:47:02.210493   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:47:02.210560   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:47:02.210743   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:47:02.210758   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:47:02.210795   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:47:02.210875   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:47:02.210883   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:47:02.210905   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:47:02.210962   57519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:47:02.411383   57519 provision.go:177] copyRemoteCerts
	I0520 11:47:02.411436   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:47:02.411458   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.414071   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414443   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.414473   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414675   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.414859   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.415006   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.415125   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.507148   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:47:02.536944   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:47:02.562291   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:47:02.589034   57519 provision.go:87] duration metric: took 384.752486ms to configureAuth
	I0520 11:47:02.589092   57519 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:47:02.589320   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:47:02.589412   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.592287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592704   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.592731   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592957   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.593180   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593368   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593561   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.593724   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.593910   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.593931   57519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:47:02.899943   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:47:02.900037   57519 machine.go:97] duration metric: took 1.062511085s to provisionDockerMachine
	I0520 11:47:02.900074   57519 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:47:02.900112   57519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:47:02.900155   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:02.900546   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:47:02.900581   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.903643   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904057   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.904089   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904253   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.904433   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.904580   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.904710   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.999752   57519 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:47:03.005832   57519 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:47:03.005856   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:47:03.005940   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:47:03.006029   57519 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:47:03.006145   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:47:03.016853   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:03.049845   57519 start.go:296] duration metric: took 149.725ms for postStartSetup
	I0520 11:47:03.049886   57519 fix.go:56] duration metric: took 19.183954619s for fixHost
	I0520 11:47:03.049912   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.052841   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053175   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.053203   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053342   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.053537   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053730   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053899   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.054048   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:03.054227   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:03.054241   57519 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:47:03.166461   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205623.121199957
	
	I0520 11:47:03.166490   57519 fix.go:216] guest clock: 1716205623.121199957
	I0520 11:47:03.166500   57519 fix.go:229] Guest: 2024-05-20 11:47:03.121199957 +0000 UTC Remote: 2024-05-20 11:47:03.049891315 +0000 UTC m=+366.742654246 (delta=71.308642ms)
	I0520 11:47:03.166523   57519 fix.go:200] guest clock delta is within tolerance: 71.308642ms
	I0520 11:47:03.166530   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 19.30063132s
	I0520 11:47:03.166554   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.166858   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:03.169486   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169816   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.169861   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169990   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170546   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170728   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170814   57519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:47:03.170875   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.170920   57519 ssh_runner.go:195] Run: cat /version.json
	I0520 11:47:03.170944   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.173717   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.173944   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174128   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174153   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174338   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174365   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174371   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174570   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174571   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174793   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.174956   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.175049   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:03.175169   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:47:03.276456   57519 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:47:03.276537   57519 ssh_runner.go:195] Run: systemctl --version
	I0520 11:47:03.282946   57519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:47:03.428948   57519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:47:03.436462   57519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:47:03.436529   57519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:47:03.453022   57519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:47:03.453045   57519 start.go:494] detecting cgroup driver to use...
	I0520 11:47:03.453103   57519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:47:03.469505   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:47:03.483617   57519 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:47:03.483673   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:47:03.498514   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:47:03.513169   57519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:47:03.638355   57519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:47:03.819407   57519 docker.go:233] disabling docker service ...
	I0520 11:47:03.819473   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:47:03.837928   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:47:03.853747   57519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:47:03.976480   57519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:47:04.097685   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:47:04.112917   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:47:04.132979   57519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:47:04.133044   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.145501   57519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:47:04.145594   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.157957   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.168774   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.180521   57519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:47:04.191739   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.203232   57519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.221386   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.231999   57519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:47:04.243401   57519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:47:04.243447   57519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:47:04.259232   57519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:47:04.270945   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:04.387908   57519 ssh_runner.go:195] Run: sudo systemctl restart crio
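	The sed invocations at 11:47:04 above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Roughly, the resulting drop-in would look like the sketch below, reconstructed only from the commands in this log; the [crio.image]/[crio.runtime] section headers are assumed from CRI-O's standard config layout and were not captured from the test VM:
	
		# /etc/crio/crio.conf.d/02-crio.conf (approximate reconstruction, not captured from the VM)
		[crio.image]
		# pause image pinned so kubelet and CRI-O agree (see crio.go:59 above)
		pause_image = "registry.k8s.io/pause:3.9"
	
		[crio.runtime]
		# minikube configures the cgroupfs driver here (see crio.go:70 above)
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]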
	I0520 11:47:04.526507   57519 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:47:04.526588   57519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:47:04.531568   57519 start.go:562] Will wait 60s for crictl version
	I0520 11:47:04.531628   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.535751   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:47:04.583384   57519 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:47:04.583470   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.617545   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.652685   57519 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:47:04.654199   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:04.657274   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657624   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:04.657653   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657858   57519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:47:04.662344   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:04.676310   57519 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:47:04.676437   57519 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:47:04.676480   57519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:47:04.723427   57519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:47:04.723452   57519 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:47:04.723502   57519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.723705   57519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.723729   57519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.723773   57519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.723829   57519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.723896   57519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.723913   57519 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:47:04.723767   57519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.725737   57519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.725630   57519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.725696   57519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.849687   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896118   57519 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:47:04.896166   57519 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896214   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.901457   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.940481   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:47:04.940581   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946097   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0520 11:47:04.946120   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946168   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.958539   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:05.058882   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:05.059155   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:05.061672   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:47:05.082727   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:05.087840   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:05.606193   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.677231   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.176977   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.677184   58840 node_ready.go:49] node "default-k8s-diff-port-668445" has status "Ready":"True"
	I0520 11:47:06.677208   58840 node_ready.go:38] duration metric: took 7.004390787s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:47:06.677217   58840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:06.684890   58840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690323   58840 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.690345   58840 pod_ready.go:81] duration metric: took 5.420403ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690354   58840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695944   58840 pod_ready.go:92] pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.695968   58840 pod_ready.go:81] duration metric: took 5.606311ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695979   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701932   58840 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.701953   58840 pod_ready.go:81] duration metric: took 5.965003ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701966   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:04.363354   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:06.363448   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.125249   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1: (3.166660678s)
	I0520 11:47:08.125272   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (3.179080546s)
	I0520 11:47:08.125294   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:47:08.125314   57519 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:47:08.125343   57519 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.125346   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1: (3.066428315s)
	I0520 11:47:08.125383   57519 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:47:08.125398   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125415   57519 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.125415   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (3.066238572s)
	I0520 11:47:08.125472   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125517   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (3.063820229s)
	I0520 11:47:08.125478   57519 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:47:08.125582   57519 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.125596   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (3.04284016s)
	I0520 11:47:08.125612   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125637   57519 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:47:08.125659   57519 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.125668   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1: (3.037804525s)
	I0520 11:47:08.125696   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125707   57519 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:47:08.125709   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.519483497s)
	I0520 11:47:08.125727   57519 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.125758   57519 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:47:08.125775   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125808   57519 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.125843   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.140377   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.140435   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.140472   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.140518   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.140550   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.140604   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.277671   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:47:08.277785   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.286159   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:47:08.286163   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:47:08.286261   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:08.286297   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:08.287554   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:47:08.287623   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:08.300162   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:47:08.300183   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0520 11:47:08.300197   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300203   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0520 11:47:08.300221   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0520 11:47:08.300235   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0520 11:47:08.300168   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:47:08.300245   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300254   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:08.300315   57519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:08.304669   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0520 11:47:10.580290   57519 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.279947927s)
	I0520 11:47:10.580392   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0520 11:47:10.580421   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.280153203s)
	I0520 11:47:10.580444   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:47:10.580466   57519 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:10.580514   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:07.717364   58840 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:07.717391   58840 pod_ready.go:81] duration metric: took 1.01541621s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:07.717404   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041632   58840 pod_ready.go:92] pod "kube-proxy-7ddtk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:08.041657   58840 pod_ready.go:81] duration metric: took 324.24633ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041669   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:10.047877   58840 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.364366   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:10.366662   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:12.895005   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:11.650206   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.069627914s)
	I0520 11:47:11.650241   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:47:11.650265   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:11.650322   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:13.011174   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.360821671s)
	I0520 11:47:13.011203   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:47:13.011225   57519 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.011270   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:12.048869   58840 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:12.048894   58840 pod_ready.go:81] duration metric: took 4.007218109s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:12.048903   58840 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:14.057711   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.555471   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:15.365437   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:17.864290   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.885885   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.874589428s)
	I0520 11:47:16.885917   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:47:16.885952   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:16.886034   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:18.674100   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.788037677s)
	I0520 11:47:18.674128   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:47:18.674149   57519 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:18.674197   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:20.543671   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.86944462s)
	I0520 11:47:20.543705   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:47:20.543733   57519 cache_images.go:123] Successfully loaded all cached images
	I0520 11:47:20.543739   57519 cache_images.go:92] duration metric: took 15.820273224s to LoadCachedImages
	I0520 11:47:20.543749   57519 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:47:20.543863   57519 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:47:20.543944   57519 ssh_runner.go:195] Run: crio config
	I0520 11:47:20.593647   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:20.593672   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:20.593692   57519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:47:20.593719   57519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:47:20.593863   57519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:47:20.593936   57519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:47:20.604739   57519 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:47:20.604791   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:47:20.614029   57519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:47:20.630587   57519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:47:20.647296   57519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:47:20.664453   57519 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:47:20.668296   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:20.681086   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:20.816542   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:47:20.834344   57519 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:47:20.834369   57519 certs.go:194] generating shared ca certs ...
	I0520 11:47:20.834388   57519 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:47:20.834578   57519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:47:20.834634   57519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:47:20.834649   57519 certs.go:256] generating profile certs ...
	I0520 11:47:20.834777   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:47:20.834876   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:47:20.834928   57519 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:47:20.835090   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:47:20.835146   57519 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:47:20.835159   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:47:20.835192   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:47:20.835226   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:47:20.835255   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:47:20.835314   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:20.836325   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:47:20.883227   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:47:20.918566   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:47:20.959097   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:47:21.006422   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:47:21.035717   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:47:21.071087   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:47:21.094330   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:47:21.119597   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:47:21.145488   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:47:21.173152   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:47:21.197627   57519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:47:21.214595   57519 ssh_runner.go:195] Run: openssl version
	I0520 11:47:21.220678   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:47:21.232251   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.236953   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.237015   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.243048   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:47:21.253667   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:47:21.264223   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268607   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268648   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.274112   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:47:21.284427   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:47:21.294811   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299403   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299455   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.304954   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:47:21.315320   57519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:47:21.319956   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:47:21.325692   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:47:21.331232   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:47:21.337240   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:18.556589   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.557476   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.363536   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:22.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:21.342881   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:47:21.349172   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:47:21.354721   57519 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:47:21.354839   57519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:47:21.354923   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.399473   57519 cri.go:89] found id: ""
	I0520 11:47:21.399554   57519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:47:21.413632   57519 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:47:21.413656   57519 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:47:21.413662   57519 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:47:21.413712   57519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:47:21.425556   57519 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:47:21.426515   57519 kubeconfig.go:125] found "no-preload-814599" server: "https://192.168.39.185:8443"
	I0520 11:47:21.428556   57519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:47:21.438628   57519 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.185
	I0520 11:47:21.438655   57519 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:47:21.438668   57519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:47:21.438708   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.485080   57519 cri.go:89] found id: ""
	I0520 11:47:21.485149   57519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:47:21.502591   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:47:21.512148   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:47:21.512168   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:47:21.512208   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:47:21.521378   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:47:21.521434   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:47:21.531568   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:47:21.541404   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:47:21.541459   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:47:21.552018   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.562667   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:47:21.562711   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.572240   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:47:21.581472   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:47:21.581535   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:47:21.591072   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:47:21.601114   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:21.716435   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.597273   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.792224   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.866757   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.999294   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:47:22.999389   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.499642   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.000424   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.022646   57519 api_server.go:72] duration metric: took 1.023349447s to wait for apiserver process to appear ...
	I0520 11:47:24.022673   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:47:24.022691   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:24.023162   57519 api_server.go:269] stopped: https://192.168.39.185:8443/healthz: Get "https://192.168.39.185:8443/healthz": dial tcp 192.168.39.185:8443: connect: connection refused
	I0520 11:47:24.523760   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:23.055887   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:25.555044   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:24.863476   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.363148   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.158548   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.158572   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.158585   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.205830   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.205867   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.523283   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.527618   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:27.527643   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.023238   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.029601   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.029630   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.522849   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.528350   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.528372   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.023422   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.027460   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.027493   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.523739   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.528289   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.528317   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.022859   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.027324   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:30.027348   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.522929   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.527596   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:47:30.534571   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:47:30.534592   57519 api_server.go:131] duration metric: took 6.511912744s to wait for apiserver health ...
	I0520 11:47:30.534601   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:30.534609   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:30.536647   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:47:30.537896   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:47:30.548779   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:47:30.568124   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:47:30.577489   57519 system_pods.go:59] 8 kube-system pods found
	I0520 11:47:30.577522   57519 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:47:30.577530   57519 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:47:30.577538   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:47:30.577544   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:47:30.577550   57519 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:47:30.577560   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:47:30.577567   57519 system_pods.go:61] "metrics-server-569cc877fc-v2zh9" [1be844eb-66ba-4a82-86fb-6f53de7e4d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:47:30.577577   57519 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:47:30.577586   57519 system_pods.go:74] duration metric: took 9.439264ms to wait for pod list to return data ...
	I0520 11:47:30.577598   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:47:30.581346   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:47:30.581367   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:47:30.581377   57519 node_conditions.go:105] duration metric: took 3.772205ms to run NodePressure ...
	I0520 11:47:30.581398   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:30.849703   57519 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853895   57519 kubeadm.go:733] kubelet initialised
	I0520 11:47:30.853917   57519 kubeadm.go:734] duration metric: took 4.191303ms waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853925   57519 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:30.861807   57519 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.866768   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866797   57519 pod_ready.go:81] duration metric: took 4.964994ms for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.866809   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866819   57519 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.871001   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871027   57519 pod_ready.go:81] duration metric: took 4.200278ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.871038   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871046   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.875615   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875635   57519 pod_ready.go:81] duration metric: took 4.580378ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.875645   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875655   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.971186   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971210   57519 pod_ready.go:81] duration metric: took 95.545422ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.971219   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971226   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:27.555356   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.556470   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.862807   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:32.364158   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:31.372203   57519 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:31.372223   57519 pod_ready.go:81] duration metric: took 400.985089ms for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:31.372231   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:33.379905   57519 pod_ready.go:102] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:35.378764   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:35.378793   57519 pod_ready.go:81] duration metric: took 4.006551156s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:35.378804   57519 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:32.056046   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.555606   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:36.556231   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.862916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.363328   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.385861   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.884432   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.557813   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.054616   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.863318   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:42.362965   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.893841   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.387666   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:43.055052   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:45.056688   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.863217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.362713   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:46.886365   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.384568   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.056743   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.555489   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.363267   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.863214   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.385469   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.884960   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:55.885430   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:52.055220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.059922   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.554742   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.862720   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.886732   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:00.387613   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:58.558508   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.055219   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:58.862939   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.362135   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:02.886809   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:04.887274   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:48:03.555849   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:06.059026   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.363379   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:05.863603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:07.386458   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:09.884793   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:08.554935   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.555580   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:08.362397   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.863960   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.385193   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:14.385749   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:13.058312   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.556249   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:13.363188   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.363224   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:17.861944   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:16.884628   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.885335   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.887306   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:18.055244   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.056597   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:19.862408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.862760   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:23.386334   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:25.388116   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.555469   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.362744   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:26.865098   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.884029   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.885238   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:27.054447   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.056075   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.556385   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.368181   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.862789   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:33.885831   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
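For reference, the repeated block above is the harness probing the old-k8s-version control plane: it looks for a running kube-apiserver process, asks the CRI runtime for each expected control-plane container, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same checks can be reproduced by hand on the node; a minimal sketch using the exact commands recorded in the log above:

    # list any kube-apiserver containers known to the CRI runtime (empty output = none created)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # recent kubelet activity, to see why the static pods are not starting
    sudo journalctl -u kubelet -n 400
    # describe nodes via the bundled kubectl; fails with "connection refused" while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig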
	I0520 11:48:34.055168   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.056035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.362591   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.863645   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.385081   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:38.884785   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.886065   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:38.556294   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.556701   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:39.362479   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:41.861916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.387356   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.884457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.056625   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.555575   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.864183   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.362155   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:47.885261   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:49.886656   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:48.055288   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.057220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:48.362952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.363988   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.863467   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.385482   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.386207   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
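Every describe-nodes attempt in this window fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8443, which is consistent with crictl reporting no kube-apiserver container at all. A quick way to confirm the listener is absent on the node (a sketch; assumes ss and curl are available in the minikube VM):

    # no output means nothing is listening on the apiserver port
    sudo ss -tlnp | grep 8443
    # probing the endpoint directly should report "connection refused" while the apiserver is down
    curl -k https://localhost:8443/healthz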
	I0520 11:48:52.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.557133   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:55.362033   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:57.362624   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:56.885083   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.885662   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:57.055207   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.055409   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.554795   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.362869   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.363107   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.385471   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.386313   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.386387   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:03.555624   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:06.055651   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.862582   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.862917   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.863213   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.886494   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.384808   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.055875   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.056882   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.362663   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.863246   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.388256   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.887822   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:12.557989   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:15.056569   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.864153   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.362222   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:16.888363   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.385523   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.056820   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.556351   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.362408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.861822   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.386338   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.390122   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:25.885748   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:22.057526   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:24.554983   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.864449   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.363217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.384733   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:30.884703   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.056582   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:29.057033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.862736   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.362469   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.885757   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.385655   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.555033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.555749   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:33.363377   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.863335   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:37.885668   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.385866   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:37.559287   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.055915   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.362779   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.862212   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.862459   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.385950   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.387966   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.056415   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.555806   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:45.363614   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:47.863671   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:46.884916   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.885117   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:49:47.056668   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:49.554453   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.555691   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:50.363307   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:52.862785   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.386280   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:53.386860   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:55.886047   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:54.055131   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:56.055386   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:54.863545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.363023   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:58.385457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.385971   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:58.554785   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.555726   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:59.365132   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:01.863538   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:02.884908   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.384186   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.054799   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.056653   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.865619   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:07.387760   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.885607   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.555902   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.558035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:08.366901   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:10.862545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.862664   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
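	(The "config check failed, skipping stale config cleanup" block above is minikube looking for leftover kubeconfig files under /etc/kubernetes and grepping each for the control-plane endpoint before removing it. A rough Go sketch of the same decision, for illustration only and assuming the same four paths shown in the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// hasEndpoint reports whether the file exists and mentions the given
	// control-plane endpoint; missing files or files without the endpoint are
	// removal candidates before `kubeadm init`, matching the log above.
	func hasEndpoint(path, endpoint string) (bool, error) {
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			return false, nil // missing file: nothing to keep
		}
		if err != nil {
			return false, err
		}
		return strings.Contains(string(data), endpoint), nil
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			keep, err := hasEndpoint(f, endpoint)
			if err != nil {
				fmt.Println("check failed:", err)
				continue
			}
			if !keep {
				fmt.Println("would remove", f)
			}
		}
	}
	)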
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:12.385235   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.886286   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:12.055821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.865340   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.362603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.388804   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.885030   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.056232   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.555242   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.555339   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.363446   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.861989   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.886042   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:24.384353   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.555974   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.054899   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.862506   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:25.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.384558   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.384818   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.387241   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.055327   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.055628   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.365187   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.862199   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.863892   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.885393   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.384549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.555087   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.055509   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.362103   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.362430   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.386206   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.386258   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.055828   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.554574   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.554821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.862750   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.862973   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.885277   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.886162   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.555316   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:45.556417   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:44.364091   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.364286   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.385011   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.385754   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.387956   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.055237   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.555567   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.362404   58398 pod_ready.go:81] duration metric: took 4m0.006145654s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:50:48.362424   58398 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:50:48.362431   58398 pod_ready.go:38] duration metric: took 4m3.605442277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
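	(The pod_ready.go:102 lines running through this section are repeated polls of the metrics-server pod's Ready condition, and the 4m0s figure above is the per-pod timeout for that extra wait. A minimal client-go-style check of the same condition; the helper name here is made up for illustration:

	package main

	import corev1 "k8s.io/api/core/v1"

	// isPodReady mirrors the condition the log is polling: a pod is "Ready"
	// only when its PodReady condition reports True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	)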
	I0520 11:50:48.362444   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:50:48.362469   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:48.362528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:48.422343   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.422361   58398 cri.go:89] found id: ""
	I0520 11:50:48.422369   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:48.422428   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.427448   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:48.427512   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:48.477327   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.477353   58398 cri.go:89] found id: ""
	I0520 11:50:48.477361   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:48.477418   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.482113   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:48.482161   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:48.525019   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:48.525039   58398 cri.go:89] found id: ""
	I0520 11:50:48.525046   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:48.525087   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.529544   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:48.529616   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:48.571031   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:48.571055   58398 cri.go:89] found id: ""
	I0520 11:50:48.571065   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:48.571117   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.575601   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:48.575653   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:48.617086   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:48.617111   58398 cri.go:89] found id: ""
	I0520 11:50:48.617120   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:48.617174   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.623512   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:48.623585   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:48.660141   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.660170   58398 cri.go:89] found id: ""
	I0520 11:50:48.660179   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:48.660236   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.665578   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:48.665648   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:48.704366   58398 cri.go:89] found id: ""
	I0520 11:50:48.704398   58398 logs.go:276] 0 containers: []
	W0520 11:50:48.704408   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:48.704418   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:48.704482   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:48.745897   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:48.745925   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:48.745931   58398 cri.go:89] found id: ""
	I0520 11:50:48.745940   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:48.745997   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.750982   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.755064   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:48.755084   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.804692   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:48.804720   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.862735   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:48.862765   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:48.914854   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:48.914885   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:48.930449   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:48.930473   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.980371   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:48.980398   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:49.019847   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:49.019871   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:49.066103   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:49.066131   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:49.102912   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:49.102938   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:49.148380   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:49.148403   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:49.184436   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:49.184466   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:49.658226   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:49.658267   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:49.715674   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:49.715724   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:52.360868   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:52.378573   58398 api_server.go:72] duration metric: took 4m14.818629715s to wait for apiserver process to appear ...
	I0520 11:50:52.378595   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:50:52.378634   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:52.378694   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:52.423033   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.423057   58398 cri.go:89] found id: ""
	I0520 11:50:52.423067   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:52.423124   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.427361   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:52.427422   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:52.472357   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.472383   58398 cri.go:89] found id: ""
	I0520 11:50:52.472393   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:52.472452   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.477472   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:52.477536   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:52.517721   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.517742   58398 cri.go:89] found id: ""
	I0520 11:50:52.517750   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:52.517806   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.522347   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:52.522415   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:52.562720   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:52.562747   58398 cri.go:89] found id: ""
	I0520 11:50:52.562757   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:52.562815   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.567322   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:52.567378   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:52.605752   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:52.605779   58398 cri.go:89] found id: ""
	I0520 11:50:52.605787   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:52.605841   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.609932   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:52.609996   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:52.646704   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:52.646734   58398 cri.go:89] found id: ""
	I0520 11:50:52.646745   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:52.646805   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.651422   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:52.651493   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:52.690880   58398 cri.go:89] found id: ""
	I0520 11:50:52.690909   58398 logs.go:276] 0 containers: []
	W0520 11:50:52.690919   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:52.690927   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:52.690997   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:52.733525   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.733558   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:52.733562   58398 cri.go:89] found id: ""
	I0520 11:50:52.733568   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:52.733615   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.738004   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.743086   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:52.743108   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:52.795042   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:52.795078   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:52.809356   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:52.809384   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.854486   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:52.854517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.904266   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:52.904289   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.941257   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:52.941285   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.886142   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.385782   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
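	(The kubeadm [kubelet-check] messages above come from probing the kubelet's local healthz endpoint on port 10248 and getting connection refused. A tiny Go sketch of the same probe, as an illustration rather than kubeadm's own code:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// kubeletHealthy performs the HTTP call the [kubelet-check] message
	// describes: GET http://localhost:10248/healthz, healthy on a 200 response.
	func kubeletHealthy() bool {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			return false // e.g. "connection refused" as in the log above
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}

	func main() {
		fmt.Println("kubelet healthy:", kubeletHealthy())
	}
	)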
	I0520 11:50:53.057541   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.554909   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:52.978583   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:52.978613   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:53.081246   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:53.081276   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:53.124472   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:53.124509   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:53.167027   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:53.167055   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:53.223091   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:53.223125   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:53.261390   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:53.261417   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:53.680820   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:53.680877   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:56.227910   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:50:56.233856   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:50:56.234827   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:50:56.234852   58398 api_server.go:131] duration metric: took 3.856250847s to wait for apiserver health ...
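	(The "Checking apiserver healthz at https://192.168.50.167:8443/healthz ... returned 200: ok" exchange above is an HTTPS GET against the apiserver's /healthz endpoint. A minimal sketch; the InsecureSkipVerify here is an assumption made for brevity, since minikube's real check trusts the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthz GETs /healthz and returns the status code plus the short
	// body ("ok" on a healthy apiserver, as in the log above).
	func apiserverHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: production checks should verify the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		code, body, err := apiserverHealthz("https://192.168.50.167:8443/healthz")
		fmt.Println(code, body, err)
	}
	)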
	I0520 11:50:56.234859   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:50:56.234880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:56.234923   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:56.273520   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.273539   58398 cri.go:89] found id: ""
	I0520 11:50:56.273545   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:56.273589   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.278121   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:56.278181   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:56.311934   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.311958   58398 cri.go:89] found id: ""
	I0520 11:50:56.311966   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:56.312011   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.315972   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:56.316033   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:56.351237   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.351259   58398 cri.go:89] found id: ""
	I0520 11:50:56.351268   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:56.351323   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.355480   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:56.355528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:56.389768   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.389782   58398 cri.go:89] found id: ""
	I0520 11:50:56.389789   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:56.389829   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.393815   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:56.393869   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:56.436557   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:56.436582   58398 cri.go:89] found id: ""
	I0520 11:50:56.436592   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:56.436637   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.440866   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:56.440943   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:56.480688   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.480734   58398 cri.go:89] found id: ""
	I0520 11:50:56.480744   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:56.480814   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.485880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:56.485974   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:56.521965   58398 cri.go:89] found id: ""
	I0520 11:50:56.521992   58398 logs.go:276] 0 containers: []
	W0520 11:50:56.521999   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:56.522005   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:56.522063   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:56.567585   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.567611   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:56.567617   58398 cri.go:89] found id: ""
	I0520 11:50:56.567627   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:56.567677   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.572972   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.577459   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:56.577484   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.636435   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:56.636467   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.675690   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:56.675714   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.720035   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:56.720065   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:56.734275   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:56.734302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:56.837237   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:56.837268   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.885482   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:56.885517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.922276   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:56.922302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.971119   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:56.971148   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:57.010383   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:57.010414   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:57.044880   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:57.044909   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:57.097906   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:57.097936   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:57.147600   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:57.147626   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:59.998576   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:50:59.998601   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:50:59.998605   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:50:59.998609   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:50:59.998613   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:50:59.998617   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:50:59.998621   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:50:59.998627   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:50:59.998631   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:50:59.998639   58398 system_pods.go:74] duration metric: took 3.763773626s to wait for pod list to return data ...
	I0520 11:50:59.998645   58398 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:00.001599   58398 default_sa.go:45] found service account: "default"
	I0520 11:51:00.001619   58398 default_sa.go:55] duration metric: took 2.967907ms for default service account to be created ...
	I0520 11:51:00.001626   58398 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:00.007124   58398 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:00.007151   58398 system_pods.go:89] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:51:00.007156   58398 system_pods.go:89] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:51:00.007161   58398 system_pods.go:89] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:51:00.007168   58398 system_pods.go:89] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:51:00.007175   58398 system_pods.go:89] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:51:00.007182   58398 system_pods.go:89] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:51:00.007194   58398 system_pods.go:89] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:00.007205   58398 system_pods.go:89] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:51:00.007215   58398 system_pods.go:126] duration metric: took 5.582307ms to wait for k8s-apps to be running ...
	I0520 11:51:00.007226   58398 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:00.007281   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:00.023780   58398 system_svc.go:56] duration metric: took 16.545966ms WaitForService to wait for kubelet
	I0520 11:51:00.023806   58398 kubeadm.go:576] duration metric: took 4m22.463864802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:00.023830   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:00.026757   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:00.026775   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:00.026788   58398 node_conditions.go:105] duration metric: took 2.951465ms to run NodePressure ...
	I0520 11:51:00.026798   58398 start.go:240] waiting for startup goroutines ...
	I0520 11:51:00.026804   58398 start.go:245] waiting for cluster config update ...
	I0520 11:51:00.026814   58398 start.go:254] writing updated cluster config ...
	I0520 11:51:00.027094   58398 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:00.078104   58398 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:00.079923   58398 out.go:177] * Done! kubectl is now configured to use "embed-certs-274976" cluster and "default" namespace by default
	I0520 11:50:57.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.385874   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:57.554979   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:59.555797   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.886892   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:05.384747   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.060543   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:04.555457   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:06.556619   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:07.386407   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:09.885284   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:09.055313   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.056015   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.886887   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:13.887204   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:15.888591   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:12.055930   58840 pod_ready.go:81] duration metric: took 4m0.00701193s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:12.055967   58840 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:51:12.055978   58840 pod_ready.go:38] duration metric: took 4m5.378750595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:12.055998   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:51:12.056034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:12.056101   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:12.115475   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.115503   58840 cri.go:89] found id: ""
	I0520 11:51:12.115513   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:12.115568   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.120850   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:12.120940   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:12.161022   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.161055   58840 cri.go:89] found id: ""
	I0520 11:51:12.161062   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:12.161130   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.165525   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:12.165586   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:12.211862   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.211890   58840 cri.go:89] found id: ""
	I0520 11:51:12.211899   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:12.211959   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.217037   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:12.217114   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:12.256985   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.257010   58840 cri.go:89] found id: ""
	I0520 11:51:12.257018   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:12.257064   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.261765   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:12.261825   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:12.299745   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.299773   58840 cri.go:89] found id: ""
	I0520 11:51:12.299781   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:12.299842   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.307681   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:12.307752   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:12.346729   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.346754   58840 cri.go:89] found id: ""
	I0520 11:51:12.346763   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:12.346819   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.352034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:12.352096   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:12.394387   58840 cri.go:89] found id: ""
	I0520 11:51:12.394405   58840 logs.go:276] 0 containers: []
	W0520 11:51:12.394412   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:12.394416   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:12.394460   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:12.434198   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:12.434224   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.434229   58840 cri.go:89] found id: ""
	I0520 11:51:12.434238   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:12.434298   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.439424   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.444096   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:12.444113   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:12.501635   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:12.501668   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:12.516529   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:12.516557   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:12.662690   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:12.662723   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.724415   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:12.724455   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.767417   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:12.767446   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.815739   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:12.815775   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.860515   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:12.860543   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.900348   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:12.900377   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.948489   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:12.948518   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.996999   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:12.997030   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:13.034866   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:13.034896   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:13.532828   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:13.532875   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.083150   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:51:16.106110   58840 api_server.go:72] duration metric: took 4m16.64390235s to wait for apiserver process to appear ...
	I0520 11:51:16.106155   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:51:16.106198   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:16.106262   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:16.148418   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:16.148450   58840 cri.go:89] found id: ""
	I0520 11:51:16.148459   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:16.148527   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.153400   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:16.153456   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:16.196145   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:16.196167   58840 cri.go:89] found id: ""
	I0520 11:51:16.196175   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:16.196231   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.200792   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:16.200857   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:16.244411   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:16.244435   58840 cri.go:89] found id: ""
	I0520 11:51:16.244446   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:16.244497   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.249637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:16.249707   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:16.288520   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.288542   58840 cri.go:89] found id: ""
	I0520 11:51:16.288550   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:16.288613   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.293307   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:16.293377   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:16.336963   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.336988   58840 cri.go:89] found id: ""
	I0520 11:51:16.336998   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:16.337053   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.342155   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:16.342236   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:16.390589   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.390610   58840 cri.go:89] found id: ""
	I0520 11:51:16.390617   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:16.390672   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.395319   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:16.395367   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:16.444947   58840 cri.go:89] found id: ""
	I0520 11:51:16.444974   58840 logs.go:276] 0 containers: []
	W0520 11:51:16.444985   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:16.445005   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:16.445069   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:16.485389   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.485414   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.485418   58840 cri.go:89] found id: ""
	I0520 11:51:16.485425   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:16.485481   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.490036   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.494650   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:16.494678   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:16.510820   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:16.510847   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.549583   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:16.549616   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.593009   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:16.593036   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.644523   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:16.644551   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.686326   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:16.686355   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.753308   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:16.753339   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.799568   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:16.799600   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:16.858422   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:16.858454   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:18.385869   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:20.386467   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:16.983556   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:16.983591   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:17.034851   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:17.034885   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:17.078220   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:17.078247   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:17.120946   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:17.120979   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:20.055408   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:51:20.059619   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:51:20.060649   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:51:20.060671   58840 api_server.go:131] duration metric: took 3.954509172s to wait for apiserver health ...
	I0520 11:51:20.060679   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:51:20.060698   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:20.060745   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:20.099666   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.099687   58840 cri.go:89] found id: ""
	I0520 11:51:20.099695   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:20.099746   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.104235   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:20.104286   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:20.162860   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.162881   58840 cri.go:89] found id: ""
	I0520 11:51:20.162887   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:20.162936   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.168215   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:20.168267   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:20.208871   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.208900   58840 cri.go:89] found id: ""
	I0520 11:51:20.208909   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:20.208982   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.213637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:20.213717   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:20.253194   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.253219   58840 cri.go:89] found id: ""
	I0520 11:51:20.253230   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:20.253286   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.258190   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:20.258261   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:20.299693   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.299713   58840 cri.go:89] found id: ""
	I0520 11:51:20.299720   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:20.299779   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.308104   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:20.308165   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:20.348954   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:20.348977   58840 cri.go:89] found id: ""
	I0520 11:51:20.348985   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:20.349030   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.353738   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:20.353798   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:20.388466   58840 cri.go:89] found id: ""
	I0520 11:51:20.388486   58840 logs.go:276] 0 containers: []
	W0520 11:51:20.388496   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:20.388504   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:20.388564   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:20.430628   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.430653   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.430659   58840 cri.go:89] found id: ""
	I0520 11:51:20.430668   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:20.430714   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.435241   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.439312   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:20.439332   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.476361   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:20.476388   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.514317   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:20.514345   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:20.566557   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:20.566604   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:20.582249   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:20.582279   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.639530   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:20.639568   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.676804   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:20.676837   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.713661   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:20.713691   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.757100   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:20.757125   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:20.805824   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:20.805859   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:20.921508   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:20.921540   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.968752   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:20.968785   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:21.024773   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:21.024802   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:23.915504   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:51:23.915536   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.915540   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.915544   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.915549   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.915552   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.915555   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.915562   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.915566   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.915574   58840 system_pods.go:74] duration metric: took 3.854889521s to wait for pod list to return data ...
	I0520 11:51:23.915582   58840 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:23.917651   58840 default_sa.go:45] found service account: "default"
	I0520 11:51:23.917672   58840 default_sa.go:55] duration metric: took 2.084872ms for default service account to be created ...
	I0520 11:51:23.917679   58840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:23.922566   58840 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:23.922589   58840 system_pods.go:89] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.922594   58840 system_pods.go:89] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.922599   58840 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.922604   58840 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.922608   58840 system_pods.go:89] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.922612   58840 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.922618   58840 system_pods.go:89] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.922623   58840 system_pods.go:89] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.922630   58840 system_pods.go:126] duration metric: took 4.946206ms to wait for k8s-apps to be running ...
	I0520 11:51:23.922638   58840 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:23.922679   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:23.941348   58840 system_svc.go:56] duration metric: took 18.700356ms WaitForService to wait for kubelet
	I0520 11:51:23.941388   58840 kubeadm.go:576] duration metric: took 4m24.479184818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:23.941414   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:23.945372   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:23.945400   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:23.945414   58840 node_conditions.go:105] duration metric: took 3.993897ms to run NodePressure ...
	I0520 11:51:23.945428   58840 start.go:240] waiting for startup goroutines ...
	I0520 11:51:23.945437   58840 start.go:245] waiting for cluster config update ...
	I0520 11:51:23.945450   58840 start.go:254] writing updated cluster config ...
	I0520 11:51:23.945739   58840 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:23.997255   58840 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:23.999485   58840 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-668445" cluster and "default" namespace by default
	I0520 11:51:22.386549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:24.885200   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:26.887178   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:29.384900   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:31.386214   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:33.386309   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:35.379767   57519 pod_ready.go:81] duration metric: took 4m0.000946428s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:35.379806   57519 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0520 11:51:35.379826   57519 pod_ready.go:38] duration metric: took 4m4.525892516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:35.379854   57519 kubeadm.go:591] duration metric: took 4m13.966188007s to restartPrimaryControlPlane
	W0520 11:51:35.379905   57519 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:51:35.379929   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:07.074203   57519 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.694253213s)
	I0520 11:52:07.074270   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:07.089871   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:07.100708   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:07.111260   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:07.111283   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:07.111333   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:07.121864   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:07.121912   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:07.131426   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:07.140637   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:07.140693   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:07.150305   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.160398   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:07.160464   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.170170   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:07.179684   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:07.179740   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:07.189036   57519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:07.242553   57519 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:07.242606   57519 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:07.396088   57519 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:07.396232   57519 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:07.396391   57519 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:07.612882   57519 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:07.614941   57519 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:07.615033   57519 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:07.615124   57519 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:07.615249   57519 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:07.615353   57519 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:07.615465   57519 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:07.615543   57519 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:07.615641   57519 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:07.615726   57519 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:07.615815   57519 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:07.615915   57519 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:07.615970   57519 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:07.616047   57519 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:07.744190   57519 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:08.012140   57519 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:08.080751   57519 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:08.369179   57519 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:08.507095   57519 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:08.507736   57519 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:08.510271   57519 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:08.512322   57519 out.go:204]   - Booting up control plane ...
	I0520 11:52:08.512437   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:08.512530   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:08.512609   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:08.533828   57519 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:08.534843   57519 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:08.534903   57519 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:08.670084   57519 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:08.670192   57519 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:09.172058   57519 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.003696ms
	I0520 11:52:09.172181   57519 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:14.174127   57519 kubeadm.go:309] [api-check] The API server is healthy after 5.002000599s
	I0520 11:52:14.189265   57519 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:52:14.707457   57519 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:52:14.731273   57519 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:52:14.731480   57519 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:52:14.746881   57519 kubeadm.go:309] [bootstrap-token] Using token: 3eyb61.mp2wh3ubkzpzouqm
	I0520 11:52:14.748381   57519 out.go:204]   - Configuring RBAC rules ...
	I0520 11:52:14.748539   57519 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:52:14.764426   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:52:14.773450   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:52:14.778770   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:52:14.782819   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:52:14.787255   57519 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:52:14.900560   57519 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:52:15.342284   57519 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:52:15.901553   57519 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:52:15.904310   57519 kubeadm.go:309] 
	I0520 11:52:15.904380   57519 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:52:15.904389   57519 kubeadm.go:309] 
	I0520 11:52:15.904507   57519 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:52:15.904545   57519 kubeadm.go:309] 
	I0520 11:52:15.904578   57519 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:52:15.904653   57519 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:52:15.904713   57519 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:52:15.904723   57519 kubeadm.go:309] 
	I0520 11:52:15.904813   57519 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:52:15.904824   57519 kubeadm.go:309] 
	I0520 11:52:15.904910   57519 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:52:15.904935   57519 kubeadm.go:309] 
	I0520 11:52:15.905012   57519 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:52:15.905113   57519 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:52:15.905218   57519 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:52:15.905239   57519 kubeadm.go:309] 
	I0520 11:52:15.905326   57519 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:52:15.905433   57519 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:52:15.905443   57519 kubeadm.go:309] 
	I0520 11:52:15.905534   57519 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.905675   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:52:15.905725   57519 kubeadm.go:309] 	--control-plane 
	I0520 11:52:15.905738   57519 kubeadm.go:309] 
	I0520 11:52:15.905851   57519 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:52:15.905862   57519 kubeadm.go:309] 
	I0520 11:52:15.905989   57519 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.906131   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:52:15.907657   57519 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:15.907690   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:52:15.907703   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:15.909664   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:52:15.911132   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:52:15.927634   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
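	(Editor's note on the "Configuring bridge CNI" step above: minikube writes a small conflist into /etc/cni/net.d on the node. A hedged sketch of how that file could be inspected from the host, assuming the profile name no-preload-814599 taken from this log; the exact contents of 1-k8s.conflist are not shown here and may differ between minikube versions:
	  # Look at the CNI config minikube copied onto the node (sketch)
	  minikube -p no-preload-814599 ssh -- ls -l /etc/cni/net.d/
	  minikube -p no-preload-814599 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	)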
	I0520 11:52:15.946135   57519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:52:15.946282   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:15.946298   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:52:16.161431   57519 ops.go:34] apiserver oom_adj: -16
	I0520 11:52:16.166400   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:16.666687   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.167351   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.666800   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.167094   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.667154   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.167261   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.666780   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.167002   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.667078   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.167229   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.666511   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.167275   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.666512   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.166471   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.667393   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.167417   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.667387   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.167046   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.666728   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.167118   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.667450   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.166833   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.666776   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.167244   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.666741   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.758525   57519 kubeadm.go:1107] duration metric: took 12.81228323s to wait for elevateKubeSystemPrivileges
	W0520 11:52:28.758571   57519 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:52:28.758582   57519 kubeadm.go:393] duration metric: took 5m7.403869165s to StartCluster
	I0520 11:52:28.758599   57519 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.758679   57519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:52:28.760488   57519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.760723   57519 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:28.762410   57519 out.go:177] * Verifying Kubernetes components...
	I0520 11:52:28.760783   57519 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:52:28.760966   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:28.763493   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:28.763499   57519 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:52:28.763527   57519 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:52:28.763528   57519 addons.go:69] Setting metrics-server=true in profile "no-preload-814599"
	W0520 11:52:28.763537   57519 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:52:28.763527   57519 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:52:28.763558   57519 addons.go:234] Setting addon metrics-server=true in "no-preload-814599"
	I0520 11:52:28.763562   57519 host.go:66] Checking if "no-preload-814599" exists ...
	W0520 11:52:28.763568   57519 addons.go:243] addon metrics-server should already be in state true
	I0520 11:52:28.763582   57519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:52:28.763589   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.763934   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763952   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763984   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764022   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.764047   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764081   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.778949   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 11:52:28.778995   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0520 11:52:28.779481   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.779522   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.780023   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780053   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780159   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780180   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780498   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780532   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780680   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.780720   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0520 11:52:28.781084   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.781094   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.781127   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.781605   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.781627   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.781951   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.782608   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.782651   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.784981   57519 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	W0520 11:52:28.785010   57519 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:52:28.785041   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.785399   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.785423   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.797463   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0520 11:52:28.797488   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0520 11:52:28.797958   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798050   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798571   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798589   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.798634   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798647   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.799040   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799043   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799206   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.799250   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.802002   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.804188   57519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:52:28.802940   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.805408   57519 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:52:28.804469   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0520 11:52:28.805363   57519 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:28.806586   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:52:28.806592   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:52:28.806601   57519 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:52:28.806607   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806613   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806965   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.807364   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.807375   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.807765   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.808210   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.808226   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.810837   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.810866   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811278   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811300   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811477   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.811716   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.811753   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811769   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811856   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812027   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.812308   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.812494   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.812643   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812795   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.824082   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0520 11:52:28.824580   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.825117   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.825144   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.825492   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.825709   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.827343   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.827545   57519 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:28.827561   57519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:52:28.827574   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.830623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831261   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.831287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.831625   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.831783   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.831912   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.984120   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:52:29.006910   57519 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016171   57519 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:52:29.016197   57519 node_ready.go:38] duration metric: took 9.258398ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016207   57519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:29.024085   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:29.092559   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:29.094377   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:52:29.094395   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:52:29.112741   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:29.133783   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:52:29.133804   57519 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:52:29.202727   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:29.202756   57519 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:52:29.347632   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:30.143576   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050979242s)
	I0520 11:52:30.143633   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143645   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143652   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030877915s)
	I0520 11:52:30.143694   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143706   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143986   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.143960   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.144014   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144026   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144037   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144040   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144053   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144061   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144068   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144045   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144326   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144345   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.146044   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.146057   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.146068   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.182464   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.182492   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.182802   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.182817   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.557632   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209948791s)
	I0520 11:52:30.557698   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.557715   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.557972   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.557992   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.557996   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558011   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.558022   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.558252   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.558267   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558277   57519 addons.go:470] Verifying addon metrics-server=true in "no-preload-814599"
	I0520 11:52:30.558302   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.560471   57519 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0520 11:52:30.561889   57519 addons.go:505] duration metric: took 1.801104362s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
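	(Editor's note: with storage-provisioner, default-storageclass and metrics-server reported as enabled above, a hedged sketch of how the metrics-server addon could be checked from outside the test harness, assuming kubectl is pointed at the no-preload-814599 context; note that this run uses the fake.domain/registry.k8s.io/echoserver image for metrics-server, so the pod may stay Pending and "kubectl top" may never return data:
	  # Check that the metrics-server pod and its APIService registration exist (sketch)
	  kubectl --context no-preload-814599 -n kube-system get deploy,pods -l k8s-app=metrics-server
	  kubectl --context no-preload-814599 get apiservice v1beta1.metrics.k8s.io
	  # Once the APIService reports Available, resource metrics should be served
	  kubectl --context no-preload-814599 top nodes
	)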
	I0520 11:52:31.030297   57519 pod_ready.go:102] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"False"
	I0520 11:52:31.538984   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.539002   57519 pod_ready.go:81] duration metric: took 2.514888376s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.539011   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544069   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.544089   57519 pod_ready.go:81] duration metric: took 5.072004ms for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544097   57519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549299   57519 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.549315   57519 pod_ready.go:81] duration metric: took 5.212738ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549323   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555210   57519 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.555229   57519 pod_ready.go:81] duration metric: took 5.899536ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555263   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560438   57519 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.560454   57519 pod_ready.go:81] duration metric: took 5.183454ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560463   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928261   57519 pod_ready.go:92] pod "kube-proxy-wkfkc" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.928280   57519 pod_ready.go:81] duration metric: took 367.811223ms for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928290   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328564   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:32.328589   57519 pod_ready.go:81] duration metric: took 400.292676ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328598   57519 pod_ready.go:38] duration metric: took 3.31238177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:32.328612   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:52:32.328672   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:52:32.346546   57519 api_server.go:72] duration metric: took 3.585793393s to wait for apiserver process to appear ...
	I0520 11:52:32.346575   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:52:32.346598   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:52:32.351109   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:52:32.352044   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:52:32.352070   57519 api_server.go:131] duration metric: took 5.485989ms to wait for apiserver health ...
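	(Editor's note: the healthz probe above can also be reproduced with kubectl once the kubeconfig is written; a minimal sketch, assuming the no-preload-814599 context from this log:
	  # Ask the API server for its health endpoints directly (sketch)
	  kubectl --context no-preload-814599 get --raw='/healthz'
	  kubectl --context no-preload-814599 get --raw='/readyz?verbose'
	)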
	I0520 11:52:32.352080   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:52:32.532017   57519 system_pods.go:59] 9 kube-system pods found
	I0520 11:52:32.532048   57519 system_pods.go:61] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.532053   57519 system_pods.go:61] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.532056   57519 system_pods.go:61] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.532060   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.532064   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.532067   57519 system_pods.go:61] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.532071   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.532077   57519 system_pods.go:61] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.532081   57519 system_pods.go:61] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.532094   57519 system_pods.go:74] duration metric: took 180.008393ms to wait for pod list to return data ...
	I0520 11:52:32.532103   57519 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:52:32.729233   57519 default_sa.go:45] found service account: "default"
	I0520 11:52:32.729264   57519 default_sa.go:55] duration metric: took 197.154237ms for default service account to be created ...
	I0520 11:52:32.729278   57519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:52:32.932686   57519 system_pods.go:86] 9 kube-system pods found
	I0520 11:52:32.932717   57519 system_pods.go:89] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.932764   57519 system_pods.go:89] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.932777   57519 system_pods.go:89] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.932786   57519 system_pods.go:89] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.932796   57519 system_pods.go:89] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.932805   57519 system_pods.go:89] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.932812   57519 system_pods.go:89] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.932824   57519 system_pods.go:89] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.932835   57519 system_pods.go:89] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.932854   57519 system_pods.go:126] duration metric: took 203.569018ms to wait for k8s-apps to be running ...
	I0520 11:52:32.932867   57519 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:52:32.932945   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:32.948983   57519 system_svc.go:56] duration metric: took 16.10841ms WaitForService to wait for kubelet
	I0520 11:52:32.949014   57519 kubeadm.go:576] duration metric: took 4.188266242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:32.949035   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:52:33.128478   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:52:33.128503   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:52:33.128514   57519 node_conditions.go:105] duration metric: took 179.473758ms to run NodePressure ...
	I0520 11:52:33.128527   57519 start.go:240] waiting for startup goroutines ...
	I0520 11:52:33.128536   57519 start.go:245] waiting for cluster config update ...
	I0520 11:52:33.128548   57519 start.go:254] writing updated cluster config ...
	I0520 11:52:33.128860   57519 ssh_runner.go:195] Run: rm -f paused
	I0520 11:52:33.177360   57519 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:52:33.179409   57519 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
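	(Editor's note: at this point the no-preload-814599 profile is up on Kubernetes v1.30.1. A hedged sketch of a quick post-start check; the profile and context names are taken from the log, the commands are standard minikube/kubectl:
	  # Confirm the profile and the control-plane components are healthy (sketch)
	  minikube -p no-preload-814599 status
	  kubectl --context no-preload-814599 get nodes -o wide
	  kubectl --context no-preload-814599 -n kube-system get pods
	)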
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
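	(Editor's note: the wait-control-plane failure above comes from process 58316, the v1.20.0 old-k8s-version start, and points at a kubelet that never became healthy on port 10248. A hedged sketch that bundles the checks kubeadm itself suggests into one pass on the node; <profile> is a placeholder because the old-k8s-version profile name is not shown in this excerpt, and the crio socket path is taken from the log:
	  # Run kubeadm's suggested kubelet/container checks on the node (sketch)
	  minikube -p <profile> ssh -- sudo systemctl status kubelet --no-pager
	  minikube -p <profile> ssh -- sudo journalctl -xeu kubelet --no-pager -n 100
	  minikube -p <profile> ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	)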
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 
	
	
	==> CRI-O <==
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.188345686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206593188313966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=235aca57-3148-474a-8169-76e6b093d59c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.189237028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0ee154c-54ca-4394-b8d0-5e76daccebc2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.189325718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0ee154c-54ca-4394-b8d0-5e76daccebc2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.189361101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d0ee154c-54ca-4394-b8d0-5e76daccebc2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.221941304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3fd779c-a974-43e6-bb1d-e1a12bb4c531 name=/runtime.v1.RuntimeService/Version
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.222013638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3fd779c-a974-43e6-bb1d-e1a12bb4c531 name=/runtime.v1.RuntimeService/Version
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.223079602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3513b665-0bca-4251-861f-e497e68ce5bd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.223500256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206593223477225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3513b665-0bca-4251-861f-e497e68ce5bd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.224040813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d993ec7-e10e-4729-8630-1c923c0a29c9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.224090334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d993ec7-e10e-4729-8630-1c923c0a29c9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.224127980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6d993ec7-e10e-4729-8630-1c923c0a29c9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.257668316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40fd28d9-d519-462e-a2eb-96a9d53cf2a4 name=/runtime.v1.RuntimeService/Version
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.257744681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40fd28d9-d519-462e-a2eb-96a9d53cf2a4 name=/runtime.v1.RuntimeService/Version
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.258989326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7a46fe7-d30c-4b27-8f7a-d728420273de name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.259441235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206593259415765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7a46fe7-d30c-4b27-8f7a-d728420273de name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.260253774Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a2a5f1b-82e5-4900-930a-490c7f4993ca name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.260315532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a2a5f1b-82e5-4900-930a-490c7f4993ca name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.260360698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a2a5f1b-82e5-4900-930a-490c7f4993ca name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.292577862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbb985fa-591d-4310-b8de-e2ec47df642e name=/runtime.v1.RuntimeService/Version
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.292702619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbb985fa-591d-4310-b8de-e2ec47df642e name=/runtime.v1.RuntimeService/Version
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.294019708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68a7698a-4abe-42c0-94f4-b46f834231eb name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.294619126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206593294585575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68a7698a-4abe-42c0-94f4-b46f834231eb name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.295317542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b7e5319-0f04-42b3-a6fb-6170c0ccce18 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.295374352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b7e5319-0f04-42b3-a6fb-6170c0ccce18 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:03:13 old-k8s-version-210420 crio[654]: time="2024-05-20 12:03:13.295409195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9b7e5319-0f04-42b3-a6fb-6170c0ccce18 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May20 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041197] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.512371] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.398190] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611528] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 11:46] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.055467] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069914] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.180624] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.154603] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.255744] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.419712] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.817677] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[ +12.205049] kauditd_printk_skb: 46 callbacks suppressed
	[May20 11:50] systemd-fstab-generator[5032]: Ignoring "noauto" option for root device
	[May20 11:52] systemd-fstab-generator[5321]: Ignoring "noauto" option for root device
	[  +0.070587] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:03:13 up 17 min,  0 users,  load average: 0.00, 0.03, 0.04
	Linux old-k8s-version-210420 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000cd6540, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000ce00c0, 0x24, 0x0, ...)
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/dial.go:221 +0x47d
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: net.(*Dialer).DialContext(0xc000bb4de0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ce00c0, 0x24, 0x0, 0x0, 0x0, ...)
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/dial.go:403 +0x22b
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bbc280, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ce00c0, 0x24, 0x60, 0x7f61534fafd0, 0x118, ...)
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: net/http.(*Transport).dial(0xc0005fc280, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ce00c0, 0x24, 0x0, 0x0, 0x0, ...)
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: net/http.(*Transport).dialConn(0xc0005fc280, 0x4f7fe00, 0xc000120018, 0x0, 0xc000b343c0, 0x5, 0xc000ce00c0, 0x24, 0x0, 0xc000c12ea0, ...)
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: net/http.(*Transport).dialConnFor(0xc0005fc280, 0xc000c10580)
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]: created by net/http.(*Transport).queueForDial
	May 20 12:03:08 old-k8s-version-210420 kubelet[6503]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	May 20 12:03:08 old-k8s-version-210420 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 20 12:03:08 old-k8s-version-210420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 20 12:03:08 old-k8s-version-210420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	May 20 12:03:08 old-k8s-version-210420 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 20 12:03:08 old-k8s-version-210420 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 20 12:03:08 old-k8s-version-210420 kubelet[6513]: I0520 12:03:08.952867    6513 server.go:416] Version: v1.20.0
	May 20 12:03:08 old-k8s-version-210420 kubelet[6513]: I0520 12:03:08.953333    6513 server.go:837] Client rotation is on, will bootstrap in background
	May 20 12:03:08 old-k8s-version-210420 kubelet[6513]: I0520 12:03:08.955649    6513 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 20 12:03:08 old-k8s-version-210420 kubelet[6513]: W0520 12:03:08.956785    6513 manager.go:159] Cannot detect current cgroup on cgroup v2
	May 20 12:03:08 old-k8s-version-210420 kubelet[6513]: I0520 12:03:08.957126    6513 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (229.785202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-210420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.45s)
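The minikube output captured above ends with a suggestion to check the kubelet cgroup driver. As a minimal sketch only (the profile name, driver, runtime and Kubernetes version are taken from this report; the exact invocation is an assumption and not part of the captured output), the retry and the kubelet checks recommended by the kubeadm message could look like this from the test host:

	# hypothetical retry with the cgroup driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-210420 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# inspect kubelet health inside the node, as the kubeadm output recommends
	out/minikube-linux-amd64 -p old-k8s-version-210420 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-210420 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"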

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (538.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274976 -n embed-certs-274976
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-20 12:09:00.751417004 +0000 UTC m=+6510.463299841
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-274976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-274976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.352µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-274976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-274976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-274976 logs -n 25: (1.610724216s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-111176             | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:07 UTC | 20 May 24 12:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                   |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                   |         |         |                     |                     |
	| stop    | -p newest-cni-111176                                   | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:07 UTC | 20 May 24 12:07 UTC |
	|         | --alsologtostderr -v=3                                 |                   |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-111176                  | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:07 UTC | 20 May 24 12:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                   |         |         |                     |                     |
	| start   | -p newest-cni-111176 --memory=2200 --alsologtostderr   | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:07 UTC | 20 May 24 12:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                   |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                   |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                   |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                   |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                   |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                   |         |         |                     |                     |
	| image   | newest-cni-111176 image list                           | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | --format=json                                          |                   |         |         |                     |                     |
	| pause   | -p newest-cni-111176                                   | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | --alsologtostderr -v=1                                 |                   |         |         |                     |                     |
	| unpause | -p newest-cni-111176                                   | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | --alsologtostderr -v=1                                 |                   |         |         |                     |                     |
	| delete  | -p newest-cni-111176                                   | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	| delete  | -p newest-cni-111176                                   | newest-cni-111176 | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	| start   | -p kindnet-893050                                      | kindnet-893050    | jenkins | v1.33.1 | 20 May 24 12:08 UTC |                     |
	|         | --memory=3072                                          |                   |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                   |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                   |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                   |         |         |                     |                     |
	|         | --container-runtime=crio                               |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 pgrep -a                                | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | kubelet                                                |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                                | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | /etc/nsswitch.conf                                     |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                                | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | /etc/hosts                                             |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                                | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | /etc/resolv.conf                                       |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo crictl                             | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | pods                                                   |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo crictl ps                          | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | --all                                                  |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo find                               | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | /etc/cni -type f -exec sh -c                           |                   |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo ip a s                             | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	| ssh     | -p auto-893050 sudo ip r s                             | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	| ssh     | -p auto-893050 sudo                                    | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | iptables-save                                          |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo iptables                           | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:08 UTC | 20 May 24 12:08 UTC |
	|         | -t nat -L -n -v                                        |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                          | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | status kubelet --all --full                            |                   |         |         |                     |                     |
	|         | --no-pager                                             |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                          | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | cat kubelet --no-pager                                 |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo journalctl                         | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | -xeu kubelet --all --full                              |                   |         |         |                     |                     |
	|         | --no-pager                                             |                   |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                                | auto-893050       | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:08:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:08:31.308616   66550 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:08:31.308747   66550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:08:31.308757   66550 out.go:304] Setting ErrFile to fd 2...
	I0520 12:08:31.308763   66550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:08:31.308985   66550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 12:08:31.309580   66550 out.go:298] Setting JSON to false
	I0520 12:08:31.310616   66550 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6654,"bootTime":1716200257,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:08:31.310678   66550 start.go:139] virtualization: kvm guest
	I0520 12:08:31.312812   66550 out.go:177] * [kindnet-893050] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:08:31.314016   66550 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 12:08:31.314043   66550 notify.go:220] Checking for updates...
	I0520 12:08:31.315209   66550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:08:31.316454   66550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 12:08:31.317685   66550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:08:31.318993   66550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:08:31.320197   66550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:08:31.321844   66550 config.go:182] Loaded profile config "auto-893050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:08:31.321967   66550 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:08:31.322074   66550 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:08:31.322192   66550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:08:31.360758   66550 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:08:31.362115   66550 start.go:297] selected driver: kvm2
	I0520 12:08:31.362133   66550 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:08:31.362163   66550 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:08:31.363107   66550 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:08:31.363187   66550 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:08:31.378005   66550 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:08:31.378060   66550 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:08:31.378321   66550 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:08:31.378359   66550 cni.go:84] Creating CNI manager for "kindnet"
	I0520 12:08:31.378367   66550 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0520 12:08:31.378426   66550 start.go:340] cluster config:
	{Name:kindnet-893050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-893050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:08:31.378572   66550 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:08:31.380342   66550 out.go:177] * Starting "kindnet-893050" primary control-plane node in "kindnet-893050" cluster
	I0520 12:08:31.381514   66550 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:08:31.381548   66550 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:08:31.381559   66550 cache.go:56] Caching tarball of preloaded images
	I0520 12:08:31.381643   66550 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:08:31.381656   66550 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:08:31.381775   66550 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kindnet-893050/config.json ...
	I0520 12:08:31.381796   66550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kindnet-893050/config.json: {Name:mke5030dc1bb86a774ea74673cd0bb0c85f6a7c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:08:31.381943   66550 start.go:360] acquireMachinesLock for kindnet-893050: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:08:31.381973   66550 start.go:364] duration metric: took 16.154µs to acquireMachinesLock for "kindnet-893050"
	I0520 12:08:31.381994   66550 start.go:93] Provisioning new machine with config: &{Name:kindnet-893050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:kindnet-893050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:08:31.382067   66550 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:08:31.693301   65323 pod_ready.go:102] pod "coredns-7db6d8ff4d-x658x" in "kube-system" namespace has status "Ready":"False"
	I0520 12:08:34.193018   65323 pod_ready.go:102] pod "coredns-7db6d8ff4d-x658x" in "kube-system" namespace has status "Ready":"False"
	I0520 12:08:31.383537   66550 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 12:08:31.383657   66550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:08:31.383695   66550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:08:31.398947   66550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43363
	I0520 12:08:31.399405   66550 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:08:31.399950   66550 main.go:141] libmachine: Using API Version  1
	I0520 12:08:31.399983   66550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:08:31.400310   66550 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:08:31.400537   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetMachineName
	I0520 12:08:31.400746   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:31.400960   66550 start.go:159] libmachine.API.Create for "kindnet-893050" (driver="kvm2")
	I0520 12:08:31.400994   66550 client.go:168] LocalClient.Create starting
	I0520 12:08:31.401029   66550 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 12:08:31.401071   66550 main.go:141] libmachine: Decoding PEM data...
	I0520 12:08:31.401095   66550 main.go:141] libmachine: Parsing certificate...
	I0520 12:08:31.401171   66550 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 12:08:31.401199   66550 main.go:141] libmachine: Decoding PEM data...
	I0520 12:08:31.401220   66550 main.go:141] libmachine: Parsing certificate...
	I0520 12:08:31.401245   66550 main.go:141] libmachine: Running pre-create checks...
	I0520 12:08:31.401269   66550 main.go:141] libmachine: (kindnet-893050) Calling .PreCreateCheck
	I0520 12:08:31.401648   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetConfigRaw
	I0520 12:08:31.402104   66550 main.go:141] libmachine: Creating machine...
	I0520 12:08:31.402121   66550 main.go:141] libmachine: (kindnet-893050) Calling .Create
	I0520 12:08:31.402248   66550 main.go:141] libmachine: (kindnet-893050) Creating KVM machine...
	I0520 12:08:31.403493   66550 main.go:141] libmachine: (kindnet-893050) DBG | found existing default KVM network
	I0520 12:08:31.404982   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.404789   66573 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:70:e0:e9} reservation:<nil>}
	I0520 12:08:31.405780   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.405724   66573 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d6:3c:1b} reservation:<nil>}
	I0520 12:08:31.406576   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.406491   66573 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f1:08:22} reservation:<nil>}
	I0520 12:08:31.407542   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.407461   66573 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a38a0}
	I0520 12:08:31.407567   66550 main.go:141] libmachine: (kindnet-893050) DBG | created network xml: 
	I0520 12:08:31.407616   66550 main.go:141] libmachine: (kindnet-893050) DBG | <network>
	I0520 12:08:31.407641   66550 main.go:141] libmachine: (kindnet-893050) DBG |   <name>mk-kindnet-893050</name>
	I0520 12:08:31.407664   66550 main.go:141] libmachine: (kindnet-893050) DBG |   <dns enable='no'/>
	I0520 12:08:31.407684   66550 main.go:141] libmachine: (kindnet-893050) DBG |   
	I0520 12:08:31.407696   66550 main.go:141] libmachine: (kindnet-893050) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0520 12:08:31.407718   66550 main.go:141] libmachine: (kindnet-893050) DBG |     <dhcp>
	I0520 12:08:31.407758   66550 main.go:141] libmachine: (kindnet-893050) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0520 12:08:31.407781   66550 main.go:141] libmachine: (kindnet-893050) DBG |     </dhcp>
	I0520 12:08:31.407800   66550 main.go:141] libmachine: (kindnet-893050) DBG |   </ip>
	I0520 12:08:31.407818   66550 main.go:141] libmachine: (kindnet-893050) DBG |   
	I0520 12:08:31.407829   66550 main.go:141] libmachine: (kindnet-893050) DBG | </network>
	I0520 12:08:31.407840   66550 main.go:141] libmachine: (kindnet-893050) DBG | 
	I0520 12:08:31.412684   66550 main.go:141] libmachine: (kindnet-893050) DBG | trying to create private KVM network mk-kindnet-893050 192.168.72.0/24...
	I0520 12:08:31.485072   66550 main.go:141] libmachine: (kindnet-893050) DBG | private KVM network mk-kindnet-893050 192.168.72.0/24 created
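The driver above composes a libvirt network XML for mk-kindnet-893050 (an isolated 192.168.72.0/24 subnet with DNS disabled and a DHCP range) and creates it. A minimal sketch of the same operation with the libvirt-go bindings, assuming the github.com/libvirt/libvirt-go package and a local qemu:///system socket; this is an illustration of the generic libvirt calls, not the kvm2 driver's actual code path:

package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	// Connect to the local libvirt daemon, the same socket the kvm2 driver talks to.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Network XML equivalent to the one emitted in the log: a private network
	// with DNS disabled and a DHCP range on 192.168.72.0/24.
	netXML := `<network>
  <name>mk-kindnet-893050</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`

	// Define the network persistently, then bring it up.
	network, err := conn.NetworkDefineXML(netXML)
	if err != nil {
		log.Fatal(err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("private KVM network mk-kindnet-893050 created")
}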
	I0520 12:08:31.485107   66550 main.go:141] libmachine: (kindnet-893050) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050 ...
	I0520 12:08:31.485138   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.482592   66573 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:08:31.485152   66550 main.go:141] libmachine: (kindnet-893050) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:08:31.485177   66550 main.go:141] libmachine: (kindnet-893050) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:08:31.704527   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.704405   66573 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa...
	I0520 12:08:31.851290   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.851152   66573 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/kindnet-893050.rawdisk...
	I0520 12:08:31.851316   66550 main.go:141] libmachine: (kindnet-893050) DBG | Writing magic tar header
	I0520 12:08:31.851337   66550 main.go:141] libmachine: (kindnet-893050) DBG | Writing SSH key tar header
	I0520 12:08:31.851350   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:31.851271   66573 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050 ...
	I0520 12:08:31.851370   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050
	I0520 12:08:31.851432   66550 main.go:141] libmachine: (kindnet-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050 (perms=drwx------)
	I0520 12:08:31.851451   66550 main.go:141] libmachine: (kindnet-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:08:31.851458   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 12:08:31.851499   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:08:31.851531   66550 main.go:141] libmachine: (kindnet-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 12:08:31.851546   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 12:08:31.851561   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:08:31.851572   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:08:31.851585   66550 main.go:141] libmachine: (kindnet-893050) DBG | Checking permissions on dir: /home
	I0520 12:08:31.851595   66550 main.go:141] libmachine: (kindnet-893050) DBG | Skipping /home - not owner
	I0520 12:08:31.851617   66550 main.go:141] libmachine: (kindnet-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 12:08:31.851649   66550 main.go:141] libmachine: (kindnet-893050) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:08:31.851672   66550 main.go:141] libmachine: (kindnet-893050) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:08:31.851689   66550 main.go:141] libmachine: (kindnet-893050) Creating domain...
	I0520 12:08:31.852630   66550 main.go:141] libmachine: (kindnet-893050) define libvirt domain using xml: 
	I0520 12:08:31.852654   66550 main.go:141] libmachine: (kindnet-893050) <domain type='kvm'>
	I0520 12:08:31.852664   66550 main.go:141] libmachine: (kindnet-893050)   <name>kindnet-893050</name>
	I0520 12:08:31.852672   66550 main.go:141] libmachine: (kindnet-893050)   <memory unit='MiB'>3072</memory>
	I0520 12:08:31.852681   66550 main.go:141] libmachine: (kindnet-893050)   <vcpu>2</vcpu>
	I0520 12:08:31.852690   66550 main.go:141] libmachine: (kindnet-893050)   <features>
	I0520 12:08:31.852717   66550 main.go:141] libmachine: (kindnet-893050)     <acpi/>
	I0520 12:08:31.852733   66550 main.go:141] libmachine: (kindnet-893050)     <apic/>
	I0520 12:08:31.852742   66550 main.go:141] libmachine: (kindnet-893050)     <pae/>
	I0520 12:08:31.852753   66550 main.go:141] libmachine: (kindnet-893050)     
	I0520 12:08:31.852765   66550 main.go:141] libmachine: (kindnet-893050)   </features>
	I0520 12:08:31.852774   66550 main.go:141] libmachine: (kindnet-893050)   <cpu mode='host-passthrough'>
	I0520 12:08:31.852785   66550 main.go:141] libmachine: (kindnet-893050)   
	I0520 12:08:31.852799   66550 main.go:141] libmachine: (kindnet-893050)   </cpu>
	I0520 12:08:31.852828   66550 main.go:141] libmachine: (kindnet-893050)   <os>
	I0520 12:08:31.852858   66550 main.go:141] libmachine: (kindnet-893050)     <type>hvm</type>
	I0520 12:08:31.852889   66550 main.go:141] libmachine: (kindnet-893050)     <boot dev='cdrom'/>
	I0520 12:08:31.852945   66550 main.go:141] libmachine: (kindnet-893050)     <boot dev='hd'/>
	I0520 12:08:31.852957   66550 main.go:141] libmachine: (kindnet-893050)     <bootmenu enable='no'/>
	I0520 12:08:31.852967   66550 main.go:141] libmachine: (kindnet-893050)   </os>
	I0520 12:08:31.852978   66550 main.go:141] libmachine: (kindnet-893050)   <devices>
	I0520 12:08:31.852991   66550 main.go:141] libmachine: (kindnet-893050)     <disk type='file' device='cdrom'>
	I0520 12:08:31.853005   66550 main.go:141] libmachine: (kindnet-893050)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/boot2docker.iso'/>
	I0520 12:08:31.853018   66550 main.go:141] libmachine: (kindnet-893050)       <target dev='hdc' bus='scsi'/>
	I0520 12:08:31.853026   66550 main.go:141] libmachine: (kindnet-893050)       <readonly/>
	I0520 12:08:31.853042   66550 main.go:141] libmachine: (kindnet-893050)     </disk>
	I0520 12:08:31.853057   66550 main.go:141] libmachine: (kindnet-893050)     <disk type='file' device='disk'>
	I0520 12:08:31.853070   66550 main.go:141] libmachine: (kindnet-893050)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:08:31.853091   66550 main.go:141] libmachine: (kindnet-893050)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/kindnet-893050.rawdisk'/>
	I0520 12:08:31.853103   66550 main.go:141] libmachine: (kindnet-893050)       <target dev='hda' bus='virtio'/>
	I0520 12:08:31.853133   66550 main.go:141] libmachine: (kindnet-893050)     </disk>
	I0520 12:08:31.853151   66550 main.go:141] libmachine: (kindnet-893050)     <interface type='network'>
	I0520 12:08:31.853164   66550 main.go:141] libmachine: (kindnet-893050)       <source network='mk-kindnet-893050'/>
	I0520 12:08:31.853175   66550 main.go:141] libmachine: (kindnet-893050)       <model type='virtio'/>
	I0520 12:08:31.853184   66550 main.go:141] libmachine: (kindnet-893050)     </interface>
	I0520 12:08:31.853199   66550 main.go:141] libmachine: (kindnet-893050)     <interface type='network'>
	I0520 12:08:31.853224   66550 main.go:141] libmachine: (kindnet-893050)       <source network='default'/>
	I0520 12:08:31.853240   66550 main.go:141] libmachine: (kindnet-893050)       <model type='virtio'/>
	I0520 12:08:31.853250   66550 main.go:141] libmachine: (kindnet-893050)     </interface>
	I0520 12:08:31.853260   66550 main.go:141] libmachine: (kindnet-893050)     <serial type='pty'>
	I0520 12:08:31.853271   66550 main.go:141] libmachine: (kindnet-893050)       <target port='0'/>
	I0520 12:08:31.853278   66550 main.go:141] libmachine: (kindnet-893050)     </serial>
	I0520 12:08:31.853284   66550 main.go:141] libmachine: (kindnet-893050)     <console type='pty'>
	I0520 12:08:31.853292   66550 main.go:141] libmachine: (kindnet-893050)       <target type='serial' port='0'/>
	I0520 12:08:31.853297   66550 main.go:141] libmachine: (kindnet-893050)     </console>
	I0520 12:08:31.853304   66550 main.go:141] libmachine: (kindnet-893050)     <rng model='virtio'>
	I0520 12:08:31.853317   66550 main.go:141] libmachine: (kindnet-893050)       <backend model='random'>/dev/random</backend>
	I0520 12:08:31.853329   66550 main.go:141] libmachine: (kindnet-893050)     </rng>
	I0520 12:08:31.853337   66550 main.go:141] libmachine: (kindnet-893050)     
	I0520 12:08:31.853343   66550 main.go:141] libmachine: (kindnet-893050)     
	I0520 12:08:31.853352   66550 main.go:141] libmachine: (kindnet-893050)   </devices>
	I0520 12:08:31.853363   66550 main.go:141] libmachine: (kindnet-893050) </domain>
	I0520 12:08:31.853375   66550 main.go:141] libmachine: (kindnet-893050) 
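Once the network exists, the driver defines the guest from the <domain type='kvm'> XML printed above and then starts it ("Creating domain..."). A comparable libvirt-go sketch under the same assumptions as the previous example; reading the XML from a file is an illustrative shortcut, not how the driver does it:

package main

import (
	"fmt"
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The <domain type='kvm'> document from the log, read from a file here
	// purely for brevity.
	domainXML, err := os.ReadFile("kindnet-893050.xml")
	if err != nil {
		log.Fatal(err)
	}

	// "define libvirt domain using xml" followed by "Creating domain...":
	// define the persistent guest, then boot it.
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain kindnet-893050 defined and started")
}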
	I0520 12:08:31.857212   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:bd:c1:3b in network default
	I0520 12:08:31.857846   66550 main.go:141] libmachine: (kindnet-893050) Ensuring networks are active...
	I0520 12:08:31.857885   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:31.858532   66550 main.go:141] libmachine: (kindnet-893050) Ensuring network default is active
	I0520 12:08:31.858919   66550 main.go:141] libmachine: (kindnet-893050) Ensuring network mk-kindnet-893050 is active
	I0520 12:08:31.859389   66550 main.go:141] libmachine: (kindnet-893050) Getting domain xml...
	I0520 12:08:31.860137   66550 main.go:141] libmachine: (kindnet-893050) Creating domain...
	I0520 12:08:33.137745   66550 main.go:141] libmachine: (kindnet-893050) Waiting to get IP...
	I0520 12:08:33.138545   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:33.139002   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:33.139027   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:33.138992   66573 retry.go:31] will retry after 252.955344ms: waiting for machine to come up
	I0520 12:08:33.393666   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:33.394249   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:33.394281   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:33.394203   66573 retry.go:31] will retry after 285.509735ms: waiting for machine to come up
	I0520 12:08:33.681701   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:33.682280   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:33.682306   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:33.682236   66573 retry.go:31] will retry after 405.304355ms: waiting for machine to come up
	I0520 12:08:34.088819   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:34.089312   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:34.089341   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:34.089277   66573 retry.go:31] will retry after 447.238573ms: waiting for machine to come up
	I0520 12:08:34.537891   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:34.538387   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:34.538415   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:34.538346   66573 retry.go:31] will retry after 758.666119ms: waiting for machine to come up
	I0520 12:08:35.298201   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:35.298732   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:35.298783   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:35.298664   66573 retry.go:31] will retry after 888.62359ms: waiting for machine to come up
	I0520 12:08:36.188615   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:36.189109   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:36.189138   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:36.189066   66573 retry.go:31] will retry after 983.519667ms: waiting for machine to come up
	I0520 12:08:36.193206   65323 pod_ready.go:102] pod "coredns-7db6d8ff4d-x658x" in "kube-system" namespace has status "Ready":"False"
	I0520 12:08:38.191847   65323 pod_ready.go:92] pod "coredns-7db6d8ff4d-x658x" in "kube-system" namespace has status "Ready":"True"
	I0520 12:08:38.191879   65323 pod_ready.go:81] duration metric: took 29.006558647s for pod "coredns-7db6d8ff4d-x658x" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.191893   65323 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.203303   65323 pod_ready.go:92] pod "etcd-auto-893050" in "kube-system" namespace has status "Ready":"True"
	I0520 12:08:38.203326   65323 pod_ready.go:81] duration metric: took 11.427249ms for pod "etcd-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.203336   65323 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.208465   65323 pod_ready.go:92] pod "kube-apiserver-auto-893050" in "kube-system" namespace has status "Ready":"True"
	I0520 12:08:38.208486   65323 pod_ready.go:81] duration metric: took 5.143234ms for pod "kube-apiserver-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.208497   65323 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.212888   65323 pod_ready.go:92] pod "kube-controller-manager-auto-893050" in "kube-system" namespace has status "Ready":"True"
	I0520 12:08:38.212910   65323 pod_ready.go:81] duration metric: took 4.405364ms for pod "kube-controller-manager-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.212936   65323 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-s9zp9" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.218458   65323 pod_ready.go:92] pod "kube-proxy-s9zp9" in "kube-system" namespace has status "Ready":"True"
	I0520 12:08:38.218477   65323 pod_ready.go:81] duration metric: took 5.533556ms for pod "kube-proxy-s9zp9" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.218485   65323 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.590486   65323 pod_ready.go:92] pod "kube-scheduler-auto-893050" in "kube-system" namespace has status "Ready":"True"
	I0520 12:08:38.590508   65323 pod_ready.go:81] duration metric: took 372.017161ms for pod "kube-scheduler-auto-893050" in "kube-system" namespace to be "Ready" ...
	I0520 12:08:38.590516   65323 pod_ready.go:38] duration metric: took 40.427528819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
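The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A rough client-go sketch of one such check; isPodReady is a hypothetical helper name, not minikube's, and it assumes the usual imports (context, k8s.io/api/core/v1 as corev1, k8s.io/apimachinery/pkg/apis/meta/v1 as metav1, k8s.io/client-go/kubernetes):

// isPodReady reports whether the named pod's Ready condition is True, which is
// what the pod_ready.go waits above keep polling for.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}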
	I0520 12:08:38.590529   65323 api_server.go:52] waiting for apiserver process to appear ...
	I0520 12:08:38.590594   65323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 12:08:38.605777   65323 api_server.go:72] duration metric: took 41.348856899s to wait for apiserver process to appear ...
	I0520 12:08:38.605801   65323 api_server.go:88] waiting for apiserver healthz status ...
	I0520 12:08:38.605817   65323 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0520 12:08:38.610089   65323 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0520 12:08:38.611215   65323 api_server.go:141] control plane version: v1.30.1
	I0520 12:08:38.611239   65323 api_server.go:131] duration metric: took 5.431286ms to wait for apiserver health ...
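The healthz probe is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A minimal sketch of that check; the URL is the one from this run, and skipping TLS verification is an assumption for a bootstrap-time probe, not necessarily what minikube does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// GET /healthz on the apiserver and expect HTTP 200 with body "ok", as in
	// the log. TLS verification is skipped in this sketch so it can run before
	// the cluster CA is configured on the client.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.141:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}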
	I0520 12:08:38.611248   65323 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 12:08:38.792903   65323 system_pods.go:59] 7 kube-system pods found
	I0520 12:08:38.792949   65323 system_pods.go:61] "coredns-7db6d8ff4d-x658x" [f7fbdf32-3be3-49fc-8d05-f09aa827b80a] Running
	I0520 12:08:38.792955   65323 system_pods.go:61] "etcd-auto-893050" [46f6684a-49b5-427b-a23d-32423419916c] Running
	I0520 12:08:38.792961   65323 system_pods.go:61] "kube-apiserver-auto-893050" [895de518-d7b0-4a09-9863-3db31a131988] Running
	I0520 12:08:38.792967   65323 system_pods.go:61] "kube-controller-manager-auto-893050" [7b4a8b9b-62fb-4aaf-9992-992156be5c7b] Running
	I0520 12:08:38.792971   65323 system_pods.go:61] "kube-proxy-s9zp9" [0faaa1cc-706b-481b-9c33-46ca74870138] Running
	I0520 12:08:38.792975   65323 system_pods.go:61] "kube-scheduler-auto-893050" [48101562-03fe-4066-95f1-f2657be62efe] Running
	I0520 12:08:38.792979   65323 system_pods.go:61] "storage-provisioner" [dc17df7b-5823-436d-95c4-95475f604135] Running
	I0520 12:08:38.792986   65323 system_pods.go:74] duration metric: took 181.731189ms to wait for pod list to return data ...
	I0520 12:08:38.792996   65323 default_sa.go:34] waiting for default service account to be created ...
	I0520 12:08:38.989034   65323 default_sa.go:45] found service account: "default"
	I0520 12:08:38.989063   65323 default_sa.go:55] duration metric: took 196.059886ms for default service account to be created ...
	I0520 12:08:38.989075   65323 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 12:08:39.193386   65323 system_pods.go:86] 7 kube-system pods found
	I0520 12:08:39.193413   65323 system_pods.go:89] "coredns-7db6d8ff4d-x658x" [f7fbdf32-3be3-49fc-8d05-f09aa827b80a] Running
	I0520 12:08:39.193418   65323 system_pods.go:89] "etcd-auto-893050" [46f6684a-49b5-427b-a23d-32423419916c] Running
	I0520 12:08:39.193422   65323 system_pods.go:89] "kube-apiserver-auto-893050" [895de518-d7b0-4a09-9863-3db31a131988] Running
	I0520 12:08:39.193426   65323 system_pods.go:89] "kube-controller-manager-auto-893050" [7b4a8b9b-62fb-4aaf-9992-992156be5c7b] Running
	I0520 12:08:39.193431   65323 system_pods.go:89] "kube-proxy-s9zp9" [0faaa1cc-706b-481b-9c33-46ca74870138] Running
	I0520 12:08:39.193434   65323 system_pods.go:89] "kube-scheduler-auto-893050" [48101562-03fe-4066-95f1-f2657be62efe] Running
	I0520 12:08:39.193438   65323 system_pods.go:89] "storage-provisioner" [dc17df7b-5823-436d-95c4-95475f604135] Running
	I0520 12:08:39.193445   65323 system_pods.go:126] duration metric: took 204.364199ms to wait for k8s-apps to be running ...
	I0520 12:08:39.193451   65323 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 12:08:39.193492   65323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 12:08:39.211139   65323 system_svc.go:56] duration metric: took 17.679162ms WaitForService to wait for kubelet
	I0520 12:08:39.211169   65323 kubeadm.go:576] duration metric: took 41.954250472s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:08:39.211191   65323 node_conditions.go:102] verifying NodePressure condition ...
	I0520 12:08:39.389739   65323 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 12:08:39.389767   65323 node_conditions.go:123] node cpu capacity is 2
	I0520 12:08:39.389780   65323 node_conditions.go:105] duration metric: took 178.583451ms to run NodePressure ...
	I0520 12:08:39.389791   65323 start.go:240] waiting for startup goroutines ...
	I0520 12:08:39.389798   65323 start.go:245] waiting for cluster config update ...
	I0520 12:08:39.389807   65323 start.go:254] writing updated cluster config ...
	I0520 12:08:39.390078   65323 ssh_runner.go:195] Run: rm -f paused
	I0520 12:08:39.439264   65323 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 12:08:39.441427   65323 out.go:177] * Done! kubectl is now configured to use "auto-893050" cluster and "default" namespace by default
	I0520 12:08:37.174590   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:37.175096   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:37.175159   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:37.175068   66573 retry.go:31] will retry after 1.341216227s: waiting for machine to come up
	I0520 12:08:38.517924   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:38.518392   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:38.518424   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:38.518335   66573 retry.go:31] will retry after 1.342040057s: waiting for machine to come up
	I0520 12:08:39.862148   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:39.862717   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:39.862751   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:39.862669   66573 retry.go:31] will retry after 2.232520118s: waiting for machine to come up
	I0520 12:08:42.096440   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:42.096984   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:42.097012   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:42.096916   66573 retry.go:31] will retry after 2.227494426s: waiting for machine to come up
	I0520 12:08:44.327160   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:44.327779   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:44.327819   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:44.327712   66573 retry.go:31] will retry after 2.387754227s: waiting for machine to come up
	I0520 12:08:46.716785   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:46.717246   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:46.717273   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:46.717205   66573 retry.go:31] will retry after 3.272252513s: waiting for machine to come up
	I0520 12:08:49.990605   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:49.990996   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find current IP address of domain kindnet-893050 in network mk-kindnet-893050
	I0520 12:08:49.991035   66550 main.go:141] libmachine: (kindnet-893050) DBG | I0520 12:08:49.990970   66573 retry.go:31] will retry after 4.465791209s: waiting for machine to come up
	I0520 12:08:54.458773   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:54.459334   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has current primary IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:54.459354   66550 main.go:141] libmachine: (kindnet-893050) Found IP for machine: 192.168.72.206
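The "Waiting to get IP..." loop above keeps re-reading the private network's DHCP leases until one matches the guest's MAC address, with a growing delay between attempts. A sketch of that polling pattern using libvirt-go's GetDHCPLeases; waitForLeaseIP is a hypothetical helper (it assumes the libvirt-go, fmt, strings and time imports), and real code would add jitter and a cap the way minikube's retry helper does:

// waitForLeaseIP polls the private network's DHCP leases until one matches the
// guest's MAC address, doubling the delay between attempts.
func waitForLeaseIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, lease := range leases {
			if strings.EqualFold(lease.Mac, mac) && lease.IPaddr != "" {
				return lease.IPaddr, nil
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for MAC %s within %s", mac, timeout)
}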
	I0520 12:08:54.459367   66550 main.go:141] libmachine: (kindnet-893050) Reserving static IP address...
	I0520 12:08:54.459725   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find host DHCP lease matching {name: "kindnet-893050", mac: "52:54:00:8d:16:9f", ip: "192.168.72.206"} in network mk-kindnet-893050
	I0520 12:08:54.535283   66550 main.go:141] libmachine: (kindnet-893050) DBG | Getting to WaitForSSH function...
	I0520 12:08:54.535315   66550 main.go:141] libmachine: (kindnet-893050) Reserved static IP address: 192.168.72.206
	I0520 12:08:54.535328   66550 main.go:141] libmachine: (kindnet-893050) Waiting for SSH to be available...
	I0520 12:08:54.537763   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:54.538064   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050
	I0520 12:08:54.538086   66550 main.go:141] libmachine: (kindnet-893050) DBG | unable to find defined IP address of network mk-kindnet-893050 interface with MAC address 52:54:00:8d:16:9f
	I0520 12:08:54.538200   66550 main.go:141] libmachine: (kindnet-893050) DBG | Using SSH client type: external
	I0520 12:08:54.538218   66550 main.go:141] libmachine: (kindnet-893050) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa (-rw-------)
	I0520 12:08:54.538266   66550 main.go:141] libmachine: (kindnet-893050) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:08:54.538279   66550 main.go:141] libmachine: (kindnet-893050) DBG | About to run SSH command:
	I0520 12:08:54.538292   66550 main.go:141] libmachine: (kindnet-893050) DBG | exit 0
	I0520 12:08:54.542233   66550 main.go:141] libmachine: (kindnet-893050) DBG | SSH cmd err, output: exit status 255: 
	I0520 12:08:54.542252   66550 main.go:141] libmachine: (kindnet-893050) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0520 12:08:54.542258   66550 main.go:141] libmachine: (kindnet-893050) DBG | command : exit 0
	I0520 12:08:54.542264   66550 main.go:141] libmachine: (kindnet-893050) DBG | err     : exit status 255
	I0520 12:08:54.542271   66550 main.go:141] libmachine: (kindnet-893050) DBG | output  : 
	I0520 12:08:57.542880   66550 main.go:141] libmachine: (kindnet-893050) DBG | Getting to WaitForSSH function...
	I0520 12:08:57.545407   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.545748   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:57.545788   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.546043   66550 main.go:141] libmachine: (kindnet-893050) DBG | Using SSH client type: external
	I0520 12:08:57.546071   66550 main.go:141] libmachine: (kindnet-893050) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa (-rw-------)
	I0520 12:08:57.546121   66550 main.go:141] libmachine: (kindnet-893050) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 12:08:57.546138   66550 main.go:141] libmachine: (kindnet-893050) DBG | About to run SSH command:
	I0520 12:08:57.546162   66550 main.go:141] libmachine: (kindnet-893050) DBG | exit 0
	I0520 12:08:57.673125   66550 main.go:141] libmachine: (kindnet-893050) DBG | SSH cmd err, output: <nil>: 
	I0520 12:08:57.673373   66550 main.go:141] libmachine: (kindnet-893050) KVM machine creation complete!
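The WaitForSSH step above simply keeps running "exit 0" over ssh, with the options printed in the log, until the command succeeds, which proves sshd is up and the key is accepted. A sketch of that loop using os/exec; waitForSSH is a hypothetical helper (assuming the fmt, os/exec and time imports), and the 3-second poll interval is an illustrative choice:

// waitForSSH keeps running "exit 0" over ssh until it succeeds, proving sshd is
// reachable and the private key is accepted. Flags mirror the ones in the log.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available within %s", ip, timeout)
}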
	I0520 12:08:57.673725   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetConfigRaw
	I0520 12:08:57.674322   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:57.674551   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:57.674747   66550 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0520 12:08:57.674765   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetState
	I0520 12:08:57.675988   66550 main.go:141] libmachine: Detecting operating system of created instance...
	I0520 12:08:57.676006   66550 main.go:141] libmachine: Waiting for SSH to be available...
	I0520 12:08:57.676013   66550 main.go:141] libmachine: Getting to WaitForSSH function...
	I0520 12:08:57.676021   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:57.678762   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.679145   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:57.679171   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.679322   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:57.679607   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:57.679785   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:57.679958   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:57.680134   66550 main.go:141] libmachine: Using SSH client type: native
	I0520 12:08:57.680336   66550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0520 12:08:57.680347   66550 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0520 12:08:57.788541   66550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:08:57.788582   66550 main.go:141] libmachine: Detecting the provisioner...
	I0520 12:08:57.788592   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:57.791631   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.792040   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:57.792087   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.792309   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:57.792510   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:57.792708   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:57.792877   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:57.793106   66550 main.go:141] libmachine: Using SSH client type: native
	I0520 12:08:57.793335   66550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0520 12:08:57.793350   66550 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0520 12:08:57.914180   66550 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0520 12:08:57.914260   66550 main.go:141] libmachine: found compatible host: buildroot
	I0520 12:08:57.914275   66550 main.go:141] libmachine: Provisioning with buildroot...
	I0520 12:08:57.914289   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetMachineName
	I0520 12:08:57.914521   66550 buildroot.go:166] provisioning hostname "kindnet-893050"
	I0520 12:08:57.914546   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetMachineName
	I0520 12:08:57.914689   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:57.917951   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.918298   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:57.918324   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:57.918518   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:57.918720   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:57.918897   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:57.919037   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:57.919179   66550 main.go:141] libmachine: Using SSH client type: native
	I0520 12:08:57.919389   66550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0520 12:08:57.919408   66550 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-893050 && echo "kindnet-893050" | sudo tee /etc/hostname
	I0520 12:08:58.048865   66550 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-893050
	
	I0520 12:08:58.048895   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:58.052157   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.052547   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.052573   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.052817   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:58.053032   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:58.053175   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:58.053338   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:58.053512   66550 main.go:141] libmachine: Using SSH client type: native
	I0520 12:08:58.053719   66550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0520 12:08:58.053743   66550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-893050' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-893050/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-893050' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 12:08:58.176604   66550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 12:08:58.176641   66550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 12:08:58.176674   66550 buildroot.go:174] setting up certificates
	I0520 12:08:58.176687   66550 provision.go:84] configureAuth start
	I0520 12:08:58.176699   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetMachineName
	I0520 12:08:58.176999   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetIP
	I0520 12:08:58.180009   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.180446   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.180474   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.180661   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:58.183091   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.183497   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.183524   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.183664   66550 provision.go:143] copyHostCerts
	I0520 12:08:58.183729   66550 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 12:08:58.183742   66550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 12:08:58.183819   66550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 12:08:58.183956   66550 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 12:08:58.183969   66550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 12:08:58.184005   66550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 12:08:58.184091   66550 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 12:08:58.184111   66550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 12:08:58.184142   66550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 12:08:58.184221   66550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.kindnet-893050 san=[127.0.0.1 192.168.72.206 kindnet-893050 localhost minikube]
	I0520 12:08:58.401830   66550 provision.go:177] copyRemoteCerts
	I0520 12:08:58.401889   66550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 12:08:58.401912   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:58.404938   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.405320   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.405354   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.405574   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:58.405778   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:58.405959   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:58.406107   66550 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa Username:docker}
	I0520 12:08:58.496526   66550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 12:08:58.527343   66550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0520 12:08:58.555342   66550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 12:08:58.587004   66550 provision.go:87] duration metric: took 410.302885ms to configureAuth
	I0520 12:08:58.587035   66550 buildroot.go:189] setting minikube options for container-runtime
	I0520 12:08:58.587243   66550 config.go:182] Loaded profile config "kindnet-893050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:08:58.587327   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:58.590073   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.590457   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.590488   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.590697   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:58.590889   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:58.591081   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:58.591246   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:58.591416   66550 main.go:141] libmachine: Using SSH client type: native
	I0520 12:08:58.591610   66550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0520 12:08:58.591625   66550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 12:08:58.894696   66550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 12:08:58.894723   66550 main.go:141] libmachine: Checking connection to Docker...
	I0520 12:08:58.894730   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetURL
	I0520 12:08:58.896151   66550 main.go:141] libmachine: (kindnet-893050) DBG | Using libvirt version 6000000
	I0520 12:08:58.898990   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.899372   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.899391   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.899566   66550 main.go:141] libmachine: Docker is up and running!
	I0520 12:08:58.899581   66550 main.go:141] libmachine: Reticulating splines...
	I0520 12:08:58.899589   66550 client.go:171] duration metric: took 27.498583642s to LocalClient.Create
	I0520 12:08:58.899614   66550 start.go:167] duration metric: took 27.498655417s to libmachine.API.Create "kindnet-893050"
	I0520 12:08:58.899622   66550 start.go:293] postStartSetup for "kindnet-893050" (driver="kvm2")
	I0520 12:08:58.899636   66550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 12:08:58.899649   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:58.899906   66550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 12:08:58.899929   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:58.903093   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.903642   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:58.903685   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:58.903893   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:58.904064   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:58.904228   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:58.904365   66550 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa Username:docker}
	I0520 12:08:58.996780   66550 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 12:08:59.001715   66550 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 12:08:59.001734   66550 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 12:08:59.001782   66550 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 12:08:59.001848   66550 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 12:08:59.001927   66550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 12:08:59.013350   66550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 12:08:59.044078   66550 start.go:296] duration metric: took 144.433204ms for postStartSetup
	I0520 12:08:59.044132   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetConfigRaw
	I0520 12:08:59.044695   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetIP
	I0520 12:08:59.047574   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.047916   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:59.047943   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.048204   66550 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/kindnet-893050/config.json ...
	I0520 12:08:59.048395   66550 start.go:128] duration metric: took 27.666316991s to createHost
	I0520 12:08:59.048441   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:59.050813   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.051207   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:59.051234   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.051330   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:59.051502   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:59.051638   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:59.051760   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:59.051876   66550 main.go:141] libmachine: Using SSH client type: native
	I0520 12:08:59.052034   66550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0520 12:08:59.052043   66550 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 12:08:59.162652   66550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716206939.142078246
	
	I0520 12:08:59.162670   66550 fix.go:216] guest clock: 1716206939.142078246
	I0520 12:08:59.162677   66550 fix.go:229] Guest: 2024-05-20 12:08:59.142078246 +0000 UTC Remote: 2024-05-20 12:08:59.048410723 +0000 UTC m=+27.772393378 (delta=93.667523ms)
	I0520 12:08:59.162695   66550 fix.go:200] guest clock delta is within tolerance: 93.667523ms
	I0520 12:08:59.162701   66550 start.go:83] releasing machines lock for "kindnet-893050", held for 27.780717757s
	I0520 12:08:59.162717   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:59.162971   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetIP
	I0520 12:08:59.165896   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.166306   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:59.166334   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.166606   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:59.167473   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:59.168566   66550 main.go:141] libmachine: (kindnet-893050) Calling .DriverName
	I0520 12:08:59.168641   66550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 12:08:59.168676   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:59.168805   66550 ssh_runner.go:195] Run: cat /version.json
	I0520 12:08:59.168836   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHHostname
	I0520 12:08:59.171449   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.172509   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:59.172551   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.172725   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:59.172885   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:59.173090   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:59.173866   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.174247   66550 main.go:141] libmachine: (kindnet-893050) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:16:9f", ip: ""} in network mk-kindnet-893050: {Iface:virbr4 ExpiryTime:2024-05-20 13:08:46 +0000 UTC Type:0 Mac:52:54:00:8d:16:9f Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:kindnet-893050 Clientid:01:52:54:00:8d:16:9f}
	I0520 12:08:59.174273   66550 main.go:141] libmachine: (kindnet-893050) DBG | domain kindnet-893050 has defined IP address 192.168.72.206 and MAC address 52:54:00:8d:16:9f in network mk-kindnet-893050
	I0520 12:08:59.174453   66550 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa Username:docker}
	I0520 12:08:59.174523   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHPort
	I0520 12:08:59.174647   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHKeyPath
	I0520 12:08:59.174743   66550 main.go:141] libmachine: (kindnet-893050) Calling .GetSSHUsername
	I0520 12:08:59.174831   66550 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/kindnet-893050/id_rsa Username:docker}
	W0520 12:08:59.275986   66550 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 12:08:59.276080   66550 ssh_runner.go:195] Run: systemctl --version
	I0520 12:08:59.282870   66550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 12:08:59.463536   66550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 12:08:59.471005   66550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 12:08:59.471069   66550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 12:08:59.489081   66550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 12:08:59.489104   66550 start.go:494] detecting cgroup driver to use...
	I0520 12:08:59.489180   66550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 12:08:59.506211   66550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 12:08:59.521202   66550 docker.go:217] disabling cri-docker service (if available) ...
	I0520 12:08:59.521281   66550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 12:08:59.536531   66550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 12:08:59.551746   66550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 12:08:59.682460   66550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 12:08:59.850540   66550 docker.go:233] disabling docker service ...
	I0520 12:08:59.850612   66550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 12:08:59.866711   66550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 12:08:59.883097   66550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 12:09:00.054231   66550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 12:09:00.199634   66550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 12:09:00.216134   66550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 12:09:00.241772   66550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 12:09:00.241836   66550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.257716   66550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 12:09:00.257773   66550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.269688   66550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.280968   66550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.292757   66550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 12:09:00.305421   66550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.316266   66550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.336886   66550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 12:09:00.348185   66550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 12:09:00.357396   66550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 12:09:00.357440   66550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 12:09:00.373213   66550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 12:09:00.383188   66550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 12:09:00.554843   66550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 12:09:00.719522   66550 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 12:09:00.719587   66550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 12:09:00.725463   66550 start.go:562] Will wait 60s for crictl version
	I0520 12:09:00.725535   66550 ssh_runner.go:195] Run: which crictl
	I0520 12:09:00.730041   66550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 12:09:00.786379   66550 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 12:09:00.786445   66550 ssh_runner.go:195] Run: crio --version
	I0520 12:09:00.821750   66550 ssh_runner.go:195] Run: crio --version
	I0520 12:09:00.865125   66550 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
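	The run above captures the CRI-O preparation that minikube performs over SSH before bringing up Kubernetes v1.30.1. Condensed from the Run: lines earlier in this log (a sketch of the provisioning sequence, not the authoritative minikube code; the more involved sed for default_sysctls is omitted here):
	
	    # point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    # load bridge netfilter and enable IPv4 forwarding for pod networking
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    # apply the new CRI-O configuration
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio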
	
	
	==> CRI-O <==
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.535050969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206941535012462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dddb20d-50d3-4088-8c67-54064af9cb85 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.536054741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bbdbb6b-d973-4895-9f3b-2cb5c736241d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.536132914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bbdbb6b-d973-4895-9f3b-2cb5c736241d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.536542804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bbdbb6b-d973-4895-9f3b-2cb5c736241d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.589348857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=494d9262-914c-4757-808d-416c204656f8 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.589582431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=494d9262-914c-4757-808d-416c204656f8 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.591993904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b47afd24-f961-4bfb-b2f4-0f47cd01351e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.592379931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206941592359197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b47afd24-f961-4bfb-b2f4-0f47cd01351e name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.593084653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e149226f-94b2-4408-a9e6-897f7bb6e1f8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.593188928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e149226f-94b2-4408-a9e6-897f7bb6e1f8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.593593533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e149226f-94b2-4408-a9e6-897f7bb6e1f8 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.670703732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3081692b-36c4-4bfe-bd28-8d873d2939b4 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.670799800Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3081692b-36c4-4bfe-bd28-8d873d2939b4 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.673063800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62279e4c-df95-48d9-8dcd-ceafaabc659c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.673932089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206941673898309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62279e4c-df95-48d9-8dcd-ceafaabc659c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.674636265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=650e033a-eab9-4c34-a51d-f955b135a0cd name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.674707569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=650e033a-eab9-4c34-a51d-f955b135a0cd name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.675053250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=650e033a-eab9-4c34-a51d-f955b135a0cd name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.721613472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1938c0dd-d329-48e7-b389-9e013443cd17 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.721729133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1938c0dd-d329-48e7-b389-9e013443cd17 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.723580614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c111c11-f95e-49fc-be33-0eda03f98fc5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.724373797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206941724140100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c111c11-f95e-49fc-be33-0eda03f98fc5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.725109935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3d216ed-262e-45a5-97ec-2216645bc89f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.725216033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3d216ed-262e-45a5-97ec-2216645bc89f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:01 embed-certs-274976 crio[728]: time="2024-05-20 12:09:01.725823399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205626603993284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5a564c207edf007abdb4a4af2059a85e47e67255c74fa45f59d4e1ef3805289,PodSandboxId:cd8a8ba27b28171bb60676422bfc26f448e5b022eb4233cc5872976c933894f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205606403280848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 493596c1-61a7-4cb2-9871-f7152e3d2d97,},Annotations:map[string]string{io.kubernetes.container.hash: de174415,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f,PodSandboxId:b93e6c366096581c0e253b3bbf21c1544d3f493293e287ce2bc3bc487eabdb56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205603462656840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-64txr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e755d2ac-2e0d-4088-bf96-9d8aadd69987,},Annotations:map[string]string{io.kubernetes.container.hash: dec2baf2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e,PodSandboxId:7581a88a8cf1d7466c5a0f961161aa8225d73e2fdf05a02a18457e595e148004,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205595798329664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
86700a9a-3778-4da8-8b62-e87269b0ffd2,},Annotations:map[string]string{io.kubernetes.container.hash: ca417f2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de,PodSandboxId:da859d16cd2d40491f0508f0055e6eb27f9cc3b3f6bd03f3cfb84fd31e8a94b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205595792199625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6n9d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e07292-bd7f-446a-aa93-3975a4860
ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 24ff0539,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577,PodSandboxId:ad71b186b0ca74863c549b872fb564a0c06b63d8348c4e08b7eba9685b263e58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205591147769591,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90d757e37a7badc927e5ba58ab24c5e,},Annotations:map[string]string{io.kub
ernetes.container.hash: 100e1eb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d,PodSandboxId:6f6f895fe620b85a0d0ec1cc4a7b0424465dc419c8eeb19378391d704bbd6115,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205591058040879,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c682a551e56a2e6ede4d4686339acf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39,PodSandboxId:0a6bf6121a3b0174806151b497ed94fdbe325af8d88d24f8e3e7a8ef06d23ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205591013047206,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9970b56a3c9345e43460a2c31682f14c,},Annotations:map[string]string{io.
kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677,PodSandboxId:92d237b96e279308e9a5d44f47a4d16b5fe0184f91e1e60bfa45cef6692a0314,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205591031226925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55f0776ab0ee09fc201405c54f684ae9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: df9f78ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3d216ed-262e-45a5-97ec-2216645bc89f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aa00e0ad2982a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   7581a88a8cf1d       storage-provisioner
	f5a564c207edf       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   cd8a8ba27b281       busybox
	618ca565a4457       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   b93e6c3660965       coredns-7db6d8ff4d-64txr
	d0a1a121eec8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   7581a88a8cf1d       storage-provisioner
	11f7139d84207       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      22 minutes ago      Running             kube-proxy                1                   da859d16cd2d4       kube-proxy-6n9d5
	e3ebf27b170f9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   ad71b186b0ca7       etcd-embed-certs-274976
	c19fee6b7c2fa       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      22 minutes ago      Running             kube-scheduler            1                   6f6f895fe620b       kube-scheduler-embed-certs-274976
	4bac0296aa85f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      22 minutes ago      Running             kube-apiserver            1                   92d237b96e279       kube-apiserver-embed-certs-274976
	f8d4797eeb87b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      22 minutes ago      Running             kube-controller-manager   1                   0a6bf6121a3b0       kube-controller-manager-embed-certs-274976
	
	
	==> coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46534 - 38642 "HINFO IN 792029033212554799.8824431450542217466. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01370272s
	
	
	==> describe nodes <==
	Name:               embed-certs-274976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-274976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=embed-certs-274976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_39_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-274976
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:08:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:07:28 +0000   Mon, 20 May 2024 11:39:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:07:28 +0000   Mon, 20 May 2024 11:39:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:07:28 +0000   Mon, 20 May 2024 11:39:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:07:28 +0000   Mon, 20 May 2024 11:46:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.167
	  Hostname:    embed-certs-274976
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e25c7aa652ce41ee982539bff8a01180
	  System UUID:                e25c7aa6-52ce-41ee-9825-39bff8a01180
	  Boot ID:                    60e737e0-598a-439c-9784-642b1f4d46f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7db6d8ff4d-64txr                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-274976                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-274976             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-274976    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-6n9d5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-274976             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-ll4n8               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-274976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-274976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-274976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-274976 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-274976 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-274976 event: Registered Node embed-certs-274976 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-274976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-274976 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-274976 event: Registered Node embed-certs-274976 in Controller
	
	
	==> dmesg <==
	[May20 11:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055084] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.714849] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.099291] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609851] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000034] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.607833] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.057057] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068624] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.186228] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.157335] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.303954] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.505438] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.062932] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.229661] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.596494] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.932714] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +4.630424] kauditd_printk_skb: 78 callbacks suppressed
	[May20 11:47] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] <==
	{"level":"info","ts":"2024-05-20T11:46:51.362527Z","caller":"traceutil/trace.go:171","msg":"trace[1291265025] linearizableReadLoop","detail":"{readStateIndex:616; appliedIndex:615; }","duration":"236.984464ms","start":"2024-05-20T11:46:51.125525Z","end":"2024-05-20T11:46:51.36251Z","steps":["trace[1291265025] 'read index received'  (duration: 29.37µs)","trace[1291265025] 'applied index is now lower than readState.Index'  (duration: 236.95378ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T11:47:12.879672Z","caller":"traceutil/trace.go:171","msg":"trace[1176906875] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"108.759041ms","start":"2024-05-20T11:47:12.770878Z","end":"2024-05-20T11:47:12.879637Z","steps":["trace[1176906875] 'process raft request'  (duration: 108.393866ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:47:15.542237Z","caller":"traceutil/trace.go:171","msg":"trace[1535775734] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"142.425198ms","start":"2024-05-20T11:47:15.399765Z","end":"2024-05-20T11:47:15.54219Z","steps":["trace[1535775734] 'process raft request'  (duration: 142.265358ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:56:32.99685Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2024-05-20T11:56:33.006299Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":832,"took":"9.080349ms","hash":2871989158,"current-db-size-bytes":2146304,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2146304,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-20T11:56:33.006361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2871989158,"revision":832,"compact-revision":-1}
	{"level":"info","ts":"2024-05-20T12:01:33.004626Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1074}
	{"level":"info","ts":"2024-05-20T12:01:33.009021Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1074,"took":"3.501441ms","hash":523178115,"current-db-size-bytes":2146304,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1130496,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-05-20T12:01:33.009108Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":523178115,"revision":1074,"compact-revision":832}
	{"level":"info","ts":"2024-05-20T12:06:33.013803Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1318}
	{"level":"info","ts":"2024-05-20T12:06:33.017363Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1318,"took":"3.239251ms","hash":2098702051,"current-db-size-bytes":2146304,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1105920,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-05-20T12:06:33.017463Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2098702051,"revision":1318,"compact-revision":1074}
	{"level":"info","ts":"2024-05-20T12:07:08.436723Z","caller":"traceutil/trace.go:171","msg":"trace[574505572] transaction","detail":"{read_only:false; response_revision:1590; number_of_response:1; }","duration":"303.578879ms","start":"2024-05-20T12:07:08.133093Z","end":"2024-05-20T12:07:08.436672Z","steps":["trace[574505572] 'process raft request'  (duration: 303.50081ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:07:08.437529Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:07:08.133079Z","time spent":"303.742622ms","remote":"127.0.0.1:46844","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ai34h4v3rpefqvrmyjgc6pj3dy\" mod_revision:1582 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ai34h4v3rpefqvrmyjgc6pj3dy\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ai34h4v3rpefqvrmyjgc6pj3dy\" > >"}
	{"level":"warn","ts":"2024-05-20T12:07:08.687961Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.302084ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:07:08.688066Z","caller":"traceutil/trace.go:171","msg":"trace[865726679] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1590; }","duration":"127.434062ms","start":"2024-05-20T12:07:08.560607Z","end":"2024-05-20T12:07:08.688041Z","steps":["trace[865726679] 'range keys from in-memory index tree'  (duration: 127.239082ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:07:08.688464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.526052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:611"}
	{"level":"info","ts":"2024-05-20T12:07:08.688552Z","caller":"traceutil/trace.go:171","msg":"trace[1838942439] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1590; }","duration":"216.64685ms","start":"2024-05-20T12:07:08.47189Z","end":"2024-05-20T12:07:08.688536Z","steps":["trace[1838942439] 'range keys from in-memory index tree'  (duration: 216.358247ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:07:08.817541Z","caller":"traceutil/trace.go:171","msg":"trace[2035937283] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"124.234898ms","start":"2024-05-20T12:07:08.693287Z","end":"2024-05-20T12:07:08.817522Z","steps":["trace[2035937283] 'process raft request'  (duration: 123.833757ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T12:08:43.740608Z","caller":"traceutil/trace.go:171","msg":"trace[261973817] linearizableReadLoop","detail":"{readStateIndex:1975; appliedIndex:1974; }","duration":"186.342312ms","start":"2024-05-20T12:08:43.554203Z","end":"2024-05-20T12:08:43.740545Z","steps":["trace[261973817] 'read index received'  (duration: 186.115692ms)","trace[261973817] 'applied index is now lower than readState.Index'  (duration: 225.602µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-20T12:08:43.740803Z","caller":"traceutil/trace.go:171","msg":"trace[1793268472] transaction","detail":"{read_only:false; response_revision:1669; number_of_response:1; }","duration":"223.471125ms","start":"2024-05-20T12:08:43.517319Z","end":"2024-05-20T12:08:43.74079Z","steps":["trace[1793268472] 'process raft request'  (duration: 223.042066ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:08:43.740986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.703546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-20T12:08:43.741074Z","caller":"traceutil/trace.go:171","msg":"trace[1226766006] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1669; }","duration":"186.878159ms","start":"2024-05-20T12:08:43.554179Z","end":"2024-05-20T12:08:43.741057Z","steps":["trace[1226766006] 'agreement among raft nodes before linearized reading'  (duration: 186.706689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:08:43.741274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.917263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-20T12:08:43.741322Z","caller":"traceutil/trace.go:171","msg":"trace[847390355] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:1669; }","duration":"182.996756ms","start":"2024-05-20T12:08:43.558318Z","end":"2024-05-20T12:08:43.741315Z","steps":["trace[847390355] 'agreement among raft nodes before linearized reading'  (duration: 182.920534ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:09:02 up 22 min,  0 users,  load average: 0.19, 0.23, 0.18
	Linux embed-certs-274976 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] <==
	I0520 12:02:35.393812       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:04:35.391769       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:04:35.392031       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:04:35.392059       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:04:35.394690       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:04:35.394836       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:04:35.394874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:06:34.397552       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:06:34.397659       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0520 12:06:35.398585       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:06:35.398646       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:06:35.398655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:06:35.398722       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:06:35.398769       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:06:35.399896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:07:35.398888       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:07:35.398976       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:07:35.398985       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:07:35.400057       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:07:35.400209       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:07:35.400251       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] <==
	I0520 12:03:17.751732       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:03:47.210870       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:03:47.759615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:04:17.215695       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:04:17.775241       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:04:47.220009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:04:47.782579       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:05:17.225905       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:05:17.790381       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:05:47.231669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:05:47.799093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:06:17.236160       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:06:17.812129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:06:47.240960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:06:47.820742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:07:17.246769       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:07:17.828833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:07:47.252161       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:07:47.410501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.047894ms"
	I0520 12:07:47.838085       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 12:08:00.413062       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="150.439µs"
	E0520 12:08:17.257891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:08:17.853034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:08:47.262368       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:08:47.862992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] <==
	I0520 11:46:36.023136       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:46:36.034520       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.167"]
	I0520 11:46:36.124627       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:46:36.124983       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:46:36.125336       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:46:36.130154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:46:36.130573       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:46:36.131194       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:36.132254       1 config.go:192] "Starting service config controller"
	I0520 11:46:36.132285       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:46:36.132324       1 config.go:319] "Starting node config controller"
	I0520 11:46:36.132328       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:46:36.133098       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:46:36.133173       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:46:36.233050       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:46:36.233094       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:46:36.234849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] <==
	I0520 11:46:34.373159       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:46:34.373224       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:34.381489       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:46:34.381537       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:46:34.382470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:46:34.382567       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	W0520 11:46:34.386804       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:46:34.386903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:46:34.391187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.391245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.397584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0520 11:46:34.397639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0520 11:46:34.397708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0520 11:46:34.397748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0520 11:46:34.397830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.397859       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.398084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.398134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.398203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0520 11:46:34.398245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0520 11:46:34.398346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0520 11:46:34.398441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W0520 11:46:34.398524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0520 11:46:34.398561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I0520 11:46:34.481766       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:06:30 embed-certs-274976 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:06:35 embed-certs-274976 kubelet[940]: E0520 12:06:35.395146     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:06:50 embed-certs-274976 kubelet[940]: E0520 12:06:50.396817     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:07:04 embed-certs-274976 kubelet[940]: E0520 12:07:04.394361     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:07:19 embed-certs-274976 kubelet[940]: E0520 12:07:19.395725     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:07:30 embed-certs-274976 kubelet[940]: E0520 12:07:30.409711     940 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:07:30 embed-certs-274976 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:07:30 embed-certs-274976 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:07:30 embed-certs-274976 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:07:30 embed-certs-274976 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:07:34 embed-certs-274976 kubelet[940]: E0520 12:07:34.428190     940 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 20 12:07:34 embed-certs-274976 kubelet[940]: E0520 12:07:34.428299     940 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 20 12:07:34 embed-certs-274976 kubelet[940]: E0520 12:07:34.428663     940 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjrht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-ll4n8_kube-system(33cfa4c5-e053-4109-8a22-9abc96828f1b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	May 20 12:07:34 embed-certs-274976 kubelet[940]: E0520 12:07:34.428753     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:07:47 embed-certs-274976 kubelet[940]: E0520 12:07:47.394515     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:08:00 embed-certs-274976 kubelet[940]: E0520 12:08:00.393998     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:08:15 embed-certs-274976 kubelet[940]: E0520 12:08:15.393980     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:08:28 embed-certs-274976 kubelet[940]: E0520 12:08:28.394848     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:08:30 embed-certs-274976 kubelet[940]: E0520 12:08:30.407184     940 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:08:30 embed-certs-274976 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:08:30 embed-certs-274976 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:08:30 embed-certs-274976 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:08:30 embed-certs-274976 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:08:43 embed-certs-274976 kubelet[940]: E0520 12:08:43.394582     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	May 20 12:08:58 embed-certs-274976 kubelet[940]: E0520 12:08:58.396639     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-ll4n8" podUID="33cfa4c5-e053-4109-8a22-9abc96828f1b"
	
	
	==> storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] <==
	I0520 11:47:06.701610       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:47:06.714580       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:47:06.714672       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:47:06.727779       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:47:06.727922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-274976_9a14e2dd-e833-40f7-968d-10ce61fe7437!
	I0520 11:47:06.728228       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2532f4d0-0cad-4897-ae61-4ccb20239f7d", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-274976_9a14e2dd-e833-40f7-968d-10ce61fe7437 became leader
	I0520 11:47:06.828665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-274976_9a14e2dd-e833-40f7-968d-10ce61fe7437!
	
	
	==> storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] <==
	I0520 11:46:35.963255       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 11:47:05.972247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274976 -n embed-certs-274976
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-274976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-ll4n8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-274976 describe pod metrics-server-569cc877fc-ll4n8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-274976 describe pod metrics-server-569cc877fc-ll4n8: exit status 1 (82.680318ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-ll4n8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-274976 describe pod metrics-server-569cc877fc-ll4n8: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (538.54s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-20 12:09:28.793889536 +0000 UTC m=+6538.505772389
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-668445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (71.844166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-668445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
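Here the dashboard addon's namespace never existed on the restarted profile, so both the pod wait and the image check come up empty. A minimal manual check, assuming the default-k8s-diff-port-668445 profile were still running, would be to confirm the addon state and then list which images actually landed in the namespace (the test expects to find registry.k8s.io/echoserver:1.4 among them):

	# is the dashboard addon enabled on this profile?
	out/minikube-linux-amd64 -p default-k8s-diff-port-668445 addons list
	# if it is, show each Deployment in the namespace and the image(s) it runs
	kubectl --context default-k8s-diff-port-668445 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'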
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-668445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-668445 logs -n 25: (1.619185898s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-893050 sudo journalctl                       | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo docker                           | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| delete  | -p embed-certs-274976                                | embed-certs-274976    | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo                                  | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| start   | -p calico-893050 --memory=3072                       | calico-893050         | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo cat                              | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo containerd                       | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo systemctl                        | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo find                             | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-893050 sudo crio                             | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-893050                                       | auto-893050           | jenkins | v1.33.1 | 20 May 24 12:09 UTC | 20 May 24 12:09 UTC |
	| start   | -p custom-flannel-893050                             | custom-flannel-893050 | jenkins | v1.33.1 | 20 May 24 12:09 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:09:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:09:08.614154   68503 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:09:08.614345   68503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:09:08.614364   68503 out.go:304] Setting ErrFile to fd 2...
	I0520 12:09:08.614372   68503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:09:08.614689   68503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 12:09:08.615655   68503 out.go:298] Setting JSON to false
	I0520 12:09:08.617410   68503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6692,"bootTime":1716200257,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:09:08.617491   68503 start.go:139] virtualization: kvm guest
	I0520 12:09:08.619867   68503 out.go:177] * [custom-flannel-893050] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:09:08.621257   68503 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 12:09:08.621213   68503 notify.go:220] Checking for updates...
	I0520 12:09:08.622521   68503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:09:08.623736   68503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 12:09:08.625045   68503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:09:08.626174   68503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:09:08.627420   68503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:09:08.628978   68503 config.go:182] Loaded profile config "calico-893050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:09:08.629073   68503 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:09:08.629147   68503 config.go:182] Loaded profile config "kindnet-893050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:09:08.629231   68503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:09:08.668003   68503 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:09:08.669203   68503 start.go:297] selected driver: kvm2
	I0520 12:09:08.669228   68503 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:09:08.669241   68503 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:09:08.670204   68503 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:09:08.670301   68503 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:09:08.687663   68503 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:09:08.687708   68503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 12:09:08.687946   68503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 12:09:08.688026   68503 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0520 12:09:08.688060   68503 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0520 12:09:08.688130   68503 start.go:340] cluster config:
	{Name:custom-flannel-893050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-893050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:09:08.688223   68503 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:09:08.690131   68503 out.go:177] * Starting "custom-flannel-893050" primary control-plane node in "custom-flannel-893050" cluster
	I0520 12:09:04.425185   68064 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 12:09:04.425340   68064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:09:04.425378   68064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:09:04.441614   68064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
	I0520 12:09:04.442214   68064 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:09:04.442999   68064 main.go:141] libmachine: Using API Version  1
	I0520 12:09:04.443029   68064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:09:04.443431   68064 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:09:04.443684   68064 main.go:141] libmachine: (calico-893050) Calling .GetMachineName
	I0520 12:09:04.443851   68064 main.go:141] libmachine: (calico-893050) Calling .DriverName
	I0520 12:09:04.444042   68064 start.go:159] libmachine.API.Create for "calico-893050" (driver="kvm2")
	I0520 12:09:04.444079   68064 client.go:168] LocalClient.Create starting
	I0520 12:09:04.444128   68064 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 12:09:04.444169   68064 main.go:141] libmachine: Decoding PEM data...
	I0520 12:09:04.444190   68064 main.go:141] libmachine: Parsing certificate...
	I0520 12:09:04.444267   68064 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 12:09:04.444296   68064 main.go:141] libmachine: Decoding PEM data...
	I0520 12:09:04.444314   68064 main.go:141] libmachine: Parsing certificate...
	I0520 12:09:04.444346   68064 main.go:141] libmachine: Running pre-create checks...
	I0520 12:09:04.444359   68064 main.go:141] libmachine: (calico-893050) Calling .PreCreateCheck
	I0520 12:09:04.444802   68064 main.go:141] libmachine: (calico-893050) Calling .GetConfigRaw
	I0520 12:09:04.445308   68064 main.go:141] libmachine: Creating machine...
	I0520 12:09:04.445323   68064 main.go:141] libmachine: (calico-893050) Calling .Create
	I0520 12:09:04.445486   68064 main.go:141] libmachine: (calico-893050) Creating KVM machine...
	I0520 12:09:04.446978   68064 main.go:141] libmachine: (calico-893050) DBG | found existing default KVM network
	I0520 12:09:04.448667   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:04.448492   68097 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:70:e0:e9} reservation:<nil>}
	I0520 12:09:04.450306   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:04.450186   68097 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003520e0}
	I0520 12:09:04.450328   68064 main.go:141] libmachine: (calico-893050) DBG | created network xml: 
	I0520 12:09:04.450342   68064 main.go:141] libmachine: (calico-893050) DBG | <network>
	I0520 12:09:04.450352   68064 main.go:141] libmachine: (calico-893050) DBG |   <name>mk-calico-893050</name>
	I0520 12:09:04.450365   68064 main.go:141] libmachine: (calico-893050) DBG |   <dns enable='no'/>
	I0520 12:09:04.450375   68064 main.go:141] libmachine: (calico-893050) DBG |   
	I0520 12:09:04.450387   68064 main.go:141] libmachine: (calico-893050) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0520 12:09:04.450397   68064 main.go:141] libmachine: (calico-893050) DBG |     <dhcp>
	I0520 12:09:04.450408   68064 main.go:141] libmachine: (calico-893050) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0520 12:09:04.450422   68064 main.go:141] libmachine: (calico-893050) DBG |     </dhcp>
	I0520 12:09:04.450432   68064 main.go:141] libmachine: (calico-893050) DBG |   </ip>
	I0520 12:09:04.450438   68064 main.go:141] libmachine: (calico-893050) DBG |   
	I0520 12:09:04.450451   68064 main.go:141] libmachine: (calico-893050) DBG | </network>
	I0520 12:09:04.450484   68064 main.go:141] libmachine: (calico-893050) DBG | 
	I0520 12:09:04.456459   68064 main.go:141] libmachine: (calico-893050) DBG | trying to create private KVM network mk-calico-893050 192.168.50.0/24...
	I0520 12:09:04.568447   68064 main.go:141] libmachine: (calico-893050) DBG | private KVM network mk-calico-893050 192.168.50.0/24 created
	I0520 12:09:04.568493   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:04.568411   68097 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:09:04.568520   68064 main.go:141] libmachine: (calico-893050) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050 ...
	I0520 12:09:04.568529   68064 main.go:141] libmachine: (calico-893050) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:09:04.568550   68064 main.go:141] libmachine: (calico-893050) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:09:04.820811   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:04.820708   68097 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050/id_rsa...
	I0520 12:09:04.983644   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:04.983518   68097 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050/calico-893050.rawdisk...
	I0520 12:09:04.983681   68064 main.go:141] libmachine: (calico-893050) DBG | Writing magic tar header
	I0520 12:09:04.983696   68064 main.go:141] libmachine: (calico-893050) DBG | Writing SSH key tar header
	I0520 12:09:04.983713   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:04.983668   68097 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050 ...
	I0520 12:09:04.983869   68064 main.go:141] libmachine: (calico-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050 (perms=drwx------)
	I0520 12:09:04.983898   68064 main.go:141] libmachine: (calico-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:09:04.983910   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050
	I0520 12:09:04.983928   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 12:09:04.983942   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:09:04.983956   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 12:09:04.983968   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:09:04.983984   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:09:04.983996   68064 main.go:141] libmachine: (calico-893050) DBG | Checking permissions on dir: /home
	I0520 12:09:04.984010   68064 main.go:141] libmachine: (calico-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 12:09:04.984026   68064 main.go:141] libmachine: (calico-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 12:09:04.984038   68064 main.go:141] libmachine: (calico-893050) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:09:04.984051   68064 main.go:141] libmachine: (calico-893050) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:09:04.984061   68064 main.go:141] libmachine: (calico-893050) Creating domain...
	I0520 12:09:04.984073   68064 main.go:141] libmachine: (calico-893050) DBG | Skipping /home - not owner
	I0520 12:09:04.985117   68064 main.go:141] libmachine: (calico-893050) define libvirt domain using xml: 
	I0520 12:09:04.985139   68064 main.go:141] libmachine: (calico-893050) <domain type='kvm'>
	I0520 12:09:04.985149   68064 main.go:141] libmachine: (calico-893050)   <name>calico-893050</name>
	I0520 12:09:04.985158   68064 main.go:141] libmachine: (calico-893050)   <memory unit='MiB'>3072</memory>
	I0520 12:09:04.985164   68064 main.go:141] libmachine: (calico-893050)   <vcpu>2</vcpu>
	I0520 12:09:04.985168   68064 main.go:141] libmachine: (calico-893050)   <features>
	I0520 12:09:04.985173   68064 main.go:141] libmachine: (calico-893050)     <acpi/>
	I0520 12:09:04.985179   68064 main.go:141] libmachine: (calico-893050)     <apic/>
	I0520 12:09:04.985186   68064 main.go:141] libmachine: (calico-893050)     <pae/>
	I0520 12:09:04.985209   68064 main.go:141] libmachine: (calico-893050)     
	I0520 12:09:04.985228   68064 main.go:141] libmachine: (calico-893050)   </features>
	I0520 12:09:04.985240   68064 main.go:141] libmachine: (calico-893050)   <cpu mode='host-passthrough'>
	I0520 12:09:04.985250   68064 main.go:141] libmachine: (calico-893050)   
	I0520 12:09:04.985260   68064 main.go:141] libmachine: (calico-893050)   </cpu>
	I0520 12:09:04.985267   68064 main.go:141] libmachine: (calico-893050)   <os>
	I0520 12:09:04.985275   68064 main.go:141] libmachine: (calico-893050)     <type>hvm</type>
	I0520 12:09:04.985287   68064 main.go:141] libmachine: (calico-893050)     <boot dev='cdrom'/>
	I0520 12:09:04.985296   68064 main.go:141] libmachine: (calico-893050)     <boot dev='hd'/>
	I0520 12:09:04.985334   68064 main.go:141] libmachine: (calico-893050)     <bootmenu enable='no'/>
	I0520 12:09:04.985353   68064 main.go:141] libmachine: (calico-893050)   </os>
	I0520 12:09:04.985364   68064 main.go:141] libmachine: (calico-893050)   <devices>
	I0520 12:09:04.985377   68064 main.go:141] libmachine: (calico-893050)     <disk type='file' device='cdrom'>
	I0520 12:09:04.985394   68064 main.go:141] libmachine: (calico-893050)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050/boot2docker.iso'/>
	I0520 12:09:04.985405   68064 main.go:141] libmachine: (calico-893050)       <target dev='hdc' bus='scsi'/>
	I0520 12:09:04.985414   68064 main.go:141] libmachine: (calico-893050)       <readonly/>
	I0520 12:09:04.985422   68064 main.go:141] libmachine: (calico-893050)     </disk>
	I0520 12:09:04.985434   68064 main.go:141] libmachine: (calico-893050)     <disk type='file' device='disk'>
	I0520 12:09:04.985449   68064 main.go:141] libmachine: (calico-893050)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:09:04.985467   68064 main.go:141] libmachine: (calico-893050)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/calico-893050/calico-893050.rawdisk'/>
	I0520 12:09:04.985479   68064 main.go:141] libmachine: (calico-893050)       <target dev='hda' bus='virtio'/>
	I0520 12:09:04.985491   68064 main.go:141] libmachine: (calico-893050)     </disk>
	I0520 12:09:04.985502   68064 main.go:141] libmachine: (calico-893050)     <interface type='network'>
	I0520 12:09:04.985512   68064 main.go:141] libmachine: (calico-893050)       <source network='mk-calico-893050'/>
	I0520 12:09:04.985525   68064 main.go:141] libmachine: (calico-893050)       <model type='virtio'/>
	I0520 12:09:04.985538   68064 main.go:141] libmachine: (calico-893050)     </interface>
	I0520 12:09:04.985549   68064 main.go:141] libmachine: (calico-893050)     <interface type='network'>
	I0520 12:09:04.985563   68064 main.go:141] libmachine: (calico-893050)       <source network='default'/>
	I0520 12:09:04.985575   68064 main.go:141] libmachine: (calico-893050)       <model type='virtio'/>
	I0520 12:09:04.985587   68064 main.go:141] libmachine: (calico-893050)     </interface>
	I0520 12:09:04.985612   68064 main.go:141] libmachine: (calico-893050)     <serial type='pty'>
	I0520 12:09:04.985629   68064 main.go:141] libmachine: (calico-893050)       <target port='0'/>
	I0520 12:09:04.985642   68064 main.go:141] libmachine: (calico-893050)     </serial>
	I0520 12:09:04.985652   68064 main.go:141] libmachine: (calico-893050)     <console type='pty'>
	I0520 12:09:04.985664   68064 main.go:141] libmachine: (calico-893050)       <target type='serial' port='0'/>
	I0520 12:09:04.985674   68064 main.go:141] libmachine: (calico-893050)     </console>
	I0520 12:09:04.985683   68064 main.go:141] libmachine: (calico-893050)     <rng model='virtio'>
	I0520 12:09:04.985692   68064 main.go:141] libmachine: (calico-893050)       <backend model='random'>/dev/random</backend>
	I0520 12:09:04.985704   68064 main.go:141] libmachine: (calico-893050)     </rng>
	I0520 12:09:04.985714   68064 main.go:141] libmachine: (calico-893050)     
	I0520 12:09:04.985724   68064 main.go:141] libmachine: (calico-893050)     
	I0520 12:09:04.985736   68064 main.go:141] libmachine: (calico-893050)   </devices>
	I0520 12:09:04.985747   68064 main.go:141] libmachine: (calico-893050) </domain>
	I0520 12:09:04.985762   68064 main.go:141] libmachine: (calico-893050) 
	I0520 12:09:04.990578   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:d2:d3:3a in network default
	I0520 12:09:04.991121   68064 main.go:141] libmachine: (calico-893050) Ensuring networks are active...
	I0520 12:09:04.991141   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:04.992109   68064 main.go:141] libmachine: (calico-893050) Ensuring network default is active
	I0520 12:09:04.992523   68064 main.go:141] libmachine: (calico-893050) Ensuring network mk-calico-893050 is active
	I0520 12:09:04.993184   68064 main.go:141] libmachine: (calico-893050) Getting domain xml...
	I0520 12:09:04.993979   68064 main.go:141] libmachine: (calico-893050) Creating domain...
	I0520 12:09:06.508049   68064 main.go:141] libmachine: (calico-893050) Waiting to get IP...
	I0520 12:09:06.508907   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:06.509388   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:06.509440   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:06.509370   68097 retry.go:31] will retry after 237.473095ms: waiting for machine to come up
	I0520 12:09:06.750217   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:06.750752   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:06.750774   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:06.750709   68097 retry.go:31] will retry after 249.57422ms: waiting for machine to come up
	I0520 12:09:07.001913   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:07.002376   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:07.002403   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:07.002344   68097 retry.go:31] will retry after 349.822406ms: waiting for machine to come up
	I0520 12:09:07.353982   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:07.354527   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:07.354556   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:07.354471   68097 retry.go:31] will retry after 523.007988ms: waiting for machine to come up
	I0520 12:09:08.278010   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:08.278443   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:08.278477   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:08.278411   68097 retry.go:31] will retry after 527.769844ms: waiting for machine to come up
	I0520 12:09:08.808359   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:08.808861   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:08.808891   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:08.808825   68097 retry.go:31] will retry after 690.629978ms: waiting for machine to come up
	I0520 12:09:07.911949   66550 out.go:204]   - Generating certificates and keys ...
	I0520 12:09:07.912092   66550 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 12:09:07.912197   66550 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 12:09:07.912294   66550 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 12:09:08.408683   66550 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 12:09:08.576015   66550 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 12:09:08.734292   66550 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 12:09:08.889948   66550 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 12:09:08.890282   66550 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-893050 localhost] and IPs [192.168.72.206 127.0.0.1 ::1]
	I0520 12:09:09.036425   66550 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 12:09:09.036700   66550 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-893050 localhost] and IPs [192.168.72.206 127.0.0.1 ::1]
	I0520 12:09:09.132684   66550 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 12:09:09.262816   66550 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 12:09:09.375961   66550 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 12:09:09.376290   66550 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 12:09:09.553349   66550 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 12:09:09.612521   66550 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 12:09:09.683741   66550 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 12:09:09.754560   66550 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 12:09:09.859111   66550 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 12:09:09.859742   66550 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 12:09:09.862269   66550 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 12:09:09.864176   66550 out.go:204]   - Booting up control plane ...
	I0520 12:09:09.864320   66550 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 12:09:09.864418   66550 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 12:09:09.864506   66550 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 12:09:09.882929   66550 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 12:09:09.884008   66550 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 12:09:09.884075   66550 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 12:09:10.016755   66550 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 12:09:10.016867   66550 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 12:09:10.518851   66550 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.733644ms
	I0520 12:09:10.518960   66550 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 12:09:08.691285   68503 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:09:08.691326   68503 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:09:08.691338   68503 cache.go:56] Caching tarball of preloaded images
	I0520 12:09:08.691427   68503 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:09:08.691441   68503 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:09:08.691552   68503 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/custom-flannel-893050/config.json ...
	I0520 12:09:08.691574   68503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/custom-flannel-893050/config.json: {Name:mk4db9710ba5e180cce79d3c2244f0b4aa5e5d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:09:08.691733   68503 start.go:360] acquireMachinesLock for custom-flannel-893050: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:09:09.501648   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:09.502053   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:09.502082   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:09.501997   68097 retry.go:31] will retry after 753.270877ms: waiting for machine to come up
	I0520 12:09:10.256998   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:10.257553   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:10.257581   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:10.257510   68097 retry.go:31] will retry after 1.013182071s: waiting for machine to come up
	I0520 12:09:11.272393   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:11.272911   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:11.272963   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:11.272857   68097 retry.go:31] will retry after 1.221457877s: waiting for machine to come up
	I0520 12:09:12.496285   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:12.496738   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:12.496769   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:12.496702   68097 retry.go:31] will retry after 1.462510628s: waiting for machine to come up
	I0520 12:09:13.961484   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:13.961969   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:13.962041   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:13.961946   68097 retry.go:31] will retry after 1.977967702s: waiting for machine to come up
	I0520 12:09:16.020436   66550 kubeadm.go:309] [api-check] The API server is healthy after 5.503633223s
	I0520 12:09:16.038875   66550 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 12:09:16.053383   66550 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 12:09:16.090372   66550 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 12:09:16.090646   66550 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-893050 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 12:09:16.103451   66550 kubeadm.go:309] [bootstrap-token] Using token: aome0h.ltah3ei3t9ewjum3
	I0520 12:09:16.105107   66550 out.go:204]   - Configuring RBAC rules ...
	I0520 12:09:16.105244   66550 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 12:09:16.111921   66550 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 12:09:16.119692   66550 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 12:09:16.122642   66550 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 12:09:16.126021   66550 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 12:09:16.130125   66550 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 12:09:16.431346   66550 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 12:09:16.881510   66550 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 12:09:17.433846   66550 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 12:09:17.433867   66550 kubeadm.go:309] 
	I0520 12:09:17.433942   66550 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 12:09:17.433953   66550 kubeadm.go:309] 
	I0520 12:09:17.434050   66550 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 12:09:17.434062   66550 kubeadm.go:309] 
	I0520 12:09:17.434106   66550 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 12:09:17.434189   66550 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 12:09:17.434254   66550 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 12:09:17.434261   66550 kubeadm.go:309] 
	I0520 12:09:17.434310   66550 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 12:09:17.434316   66550 kubeadm.go:309] 
	I0520 12:09:17.434360   66550 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 12:09:17.434366   66550 kubeadm.go:309] 
	I0520 12:09:17.434410   66550 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 12:09:17.434478   66550 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 12:09:17.434535   66550 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 12:09:17.434542   66550 kubeadm.go:309] 
	I0520 12:09:17.434614   66550 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 12:09:17.434711   66550 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 12:09:17.434723   66550 kubeadm.go:309] 
	I0520 12:09:17.434837   66550 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aome0h.ltah3ei3t9ewjum3 \
	I0520 12:09:17.434968   66550 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 12:09:17.434992   66550 kubeadm.go:309] 	--control-plane 
	I0520 12:09:17.434996   66550 kubeadm.go:309] 
	I0520 12:09:17.435081   66550 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 12:09:17.435092   66550 kubeadm.go:309] 
	I0520 12:09:17.435197   66550 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aome0h.ltah3ei3t9ewjum3 \
	I0520 12:09:17.435337   66550 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 12:09:17.436025   66550 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 12:09:17.436065   66550 cni.go:84] Creating CNI manager for "kindnet"
	I0520 12:09:17.437570   66550 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0520 12:09:15.941186   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:15.941687   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:15.941719   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:15.941612   68097 retry.go:31] will retry after 2.9047725s: waiting for machine to come up
	I0520 12:09:18.849638   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:18.850102   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:18.850129   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:18.850036   68097 retry.go:31] will retry after 3.086134845s: waiting for machine to come up
	I0520 12:09:17.438598   66550 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0520 12:09:17.446389   66550 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0520 12:09:17.446408   66550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0520 12:09:17.468513   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0520 12:09:17.748471   66550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 12:09:17.748566   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:17.748573   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-893050 minikube.k8s.io/updated_at=2024_05_20T12_09_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=kindnet-893050 minikube.k8s.io/primary=true
	I0520 12:09:17.770630   66550 ops.go:34] apiserver oom_adj: -16
	I0520 12:09:17.909307   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:18.409431   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:18.909976   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:19.410141   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:19.909943   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:20.409416   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:20.909441   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:21.937614   68064 main.go:141] libmachine: (calico-893050) DBG | domain calico-893050 has defined MAC address 52:54:00:39:96:16 in network mk-calico-893050
	I0520 12:09:21.938159   68064 main.go:141] libmachine: (calico-893050) DBG | unable to find current IP address of domain calico-893050 in network mk-calico-893050
	I0520 12:09:21.938194   68064 main.go:141] libmachine: (calico-893050) DBG | I0520 12:09:21.938125   68097 retry.go:31] will retry after 4.695988547s: waiting for machine to come up
	I0520 12:09:21.409351   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:21.909864   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:22.410123   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:22.910210   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:23.410145   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:23.909385   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:24.410287   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:24.910116   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:25.410146   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:25.910218   66550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 12:09:28.089747   68503 start.go:364] duration metric: took 19.397948328s to acquireMachinesLock for "custom-flannel-893050"
	I0520 12:09:28.089855   68503 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-893050 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-893050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:09:28.089960   68503 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:09:28.091915   68503 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0520 12:09:28.092096   68503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:09:28.092168   68503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:09:28.108834   68503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I0520 12:09:28.109210   68503 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:09:28.109750   68503 main.go:141] libmachine: Using API Version  1
	I0520 12:09:28.109772   68503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:09:28.110204   68503 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:09:28.110424   68503 main.go:141] libmachine: (custom-flannel-893050) Calling .GetMachineName
	I0520 12:09:28.110598   68503 main.go:141] libmachine: (custom-flannel-893050) Calling .DriverName
	I0520 12:09:28.110785   68503 start.go:159] libmachine.API.Create for "custom-flannel-893050" (driver="kvm2")
	I0520 12:09:28.110818   68503 client.go:168] LocalClient.Create starting
	I0520 12:09:28.110853   68503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 12:09:28.110887   68503 main.go:141] libmachine: Decoding PEM data...
	I0520 12:09:28.110902   68503 main.go:141] libmachine: Parsing certificate...
	I0520 12:09:28.110960   68503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 12:09:28.110977   68503 main.go:141] libmachine: Decoding PEM data...
	I0520 12:09:28.110991   68503 main.go:141] libmachine: Parsing certificate...
	I0520 12:09:28.111006   68503 main.go:141] libmachine: Running pre-create checks...
	I0520 12:09:28.111019   68503 main.go:141] libmachine: (custom-flannel-893050) Calling .PreCreateCheck
	I0520 12:09:28.111417   68503 main.go:141] libmachine: (custom-flannel-893050) Calling .GetConfigRaw
	I0520 12:09:28.111795   68503 main.go:141] libmachine: Creating machine...
	I0520 12:09:28.111810   68503 main.go:141] libmachine: (custom-flannel-893050) Calling .Create
	I0520 12:09:28.111937   68503 main.go:141] libmachine: (custom-flannel-893050) Creating KVM machine...
	I0520 12:09:28.113179   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | found existing default KVM network
	I0520 12:09:28.115066   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | I0520 12:09:28.114891   68658 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0520 12:09:28.115099   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | created network xml: 
	I0520 12:09:28.115226   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | <network>
	I0520 12:09:28.115240   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |   <name>mk-custom-flannel-893050</name>
	I0520 12:09:28.115251   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |   <dns enable='no'/>
	I0520 12:09:28.115266   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |   
	I0520 12:09:28.115278   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0520 12:09:28.115289   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |     <dhcp>
	I0520 12:09:28.115301   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0520 12:09:28.115313   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |     </dhcp>
	I0520 12:09:28.115321   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |   </ip>
	I0520 12:09:28.115333   68503 main.go:141] libmachine: (custom-flannel-893050) DBG |   
	I0520 12:09:28.115344   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | </network>
	I0520 12:09:28.115363   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | 
	I0520 12:09:28.120891   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | trying to create private KVM network mk-custom-flannel-893050 192.168.39.0/24...
	I0520 12:09:28.196803   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | private KVM network mk-custom-flannel-893050 192.168.39.0/24 created
	I0520 12:09:28.196838   68503 main.go:141] libmachine: (custom-flannel-893050) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050 ...
	I0520 12:09:28.196857   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | I0520 12:09:28.196720   68658 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:09:28.196896   68503 main.go:141] libmachine: (custom-flannel-893050) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:09:28.196923   68503 main.go:141] libmachine: (custom-flannel-893050) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:09:28.434211   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | I0520 12:09:28.434035   68658 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050/id_rsa...
	I0520 12:09:28.581004   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | I0520 12:09:28.580886   68658 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050/custom-flannel-893050.rawdisk...
	I0520 12:09:28.581108   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Writing magic tar header
	I0520 12:09:28.581138   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Writing SSH key tar header
	I0520 12:09:28.581305   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | I0520 12:09:28.581240   68658 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050 ...
	I0520 12:09:28.581462   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050
	I0520 12:09:28.581514   68503 main.go:141] libmachine: (custom-flannel-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050 (perms=drwx------)
	I0520 12:09:28.581593   68503 main.go:141] libmachine: (custom-flannel-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:09:28.581607   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 12:09:28.581619   68503 main.go:141] libmachine: (custom-flannel-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 12:09:28.581636   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:09:28.581646   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 12:09:28.581660   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:09:28.581670   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:09:28.581679   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Checking permissions on dir: /home
	I0520 12:09:28.581695   68503 main.go:141] libmachine: (custom-flannel-893050) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 12:09:28.581708   68503 main.go:141] libmachine: (custom-flannel-893050) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:09:28.581718   68503 main.go:141] libmachine: (custom-flannel-893050) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:09:28.581730   68503 main.go:141] libmachine: (custom-flannel-893050) Creating domain...
	I0520 12:09:28.581743   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | Skipping /home - not owner
	I0520 12:09:28.582636   68503 main.go:141] libmachine: (custom-flannel-893050) define libvirt domain using xml: 
	I0520 12:09:28.582654   68503 main.go:141] libmachine: (custom-flannel-893050) <domain type='kvm'>
	I0520 12:09:28.582661   68503 main.go:141] libmachine: (custom-flannel-893050)   <name>custom-flannel-893050</name>
	I0520 12:09:28.582666   68503 main.go:141] libmachine: (custom-flannel-893050)   <memory unit='MiB'>3072</memory>
	I0520 12:09:28.582672   68503 main.go:141] libmachine: (custom-flannel-893050)   <vcpu>2</vcpu>
	I0520 12:09:28.582676   68503 main.go:141] libmachine: (custom-flannel-893050)   <features>
	I0520 12:09:28.582685   68503 main.go:141] libmachine: (custom-flannel-893050)     <acpi/>
	I0520 12:09:28.582692   68503 main.go:141] libmachine: (custom-flannel-893050)     <apic/>
	I0520 12:09:28.582697   68503 main.go:141] libmachine: (custom-flannel-893050)     <pae/>
	I0520 12:09:28.582709   68503 main.go:141] libmachine: (custom-flannel-893050)     
	I0520 12:09:28.582715   68503 main.go:141] libmachine: (custom-flannel-893050)   </features>
	I0520 12:09:28.582727   68503 main.go:141] libmachine: (custom-flannel-893050)   <cpu mode='host-passthrough'>
	I0520 12:09:28.582735   68503 main.go:141] libmachine: (custom-flannel-893050)   
	I0520 12:09:28.582747   68503 main.go:141] libmachine: (custom-flannel-893050)   </cpu>
	I0520 12:09:28.582761   68503 main.go:141] libmachine: (custom-flannel-893050)   <os>
	I0520 12:09:28.582773   68503 main.go:141] libmachine: (custom-flannel-893050)     <type>hvm</type>
	I0520 12:09:28.582782   68503 main.go:141] libmachine: (custom-flannel-893050)     <boot dev='cdrom'/>
	I0520 12:09:28.582796   68503 main.go:141] libmachine: (custom-flannel-893050)     <boot dev='hd'/>
	I0520 12:09:28.582805   68503 main.go:141] libmachine: (custom-flannel-893050)     <bootmenu enable='no'/>
	I0520 12:09:28.582811   68503 main.go:141] libmachine: (custom-flannel-893050)   </os>
	I0520 12:09:28.582819   68503 main.go:141] libmachine: (custom-flannel-893050)   <devices>
	I0520 12:09:28.582827   68503 main.go:141] libmachine: (custom-flannel-893050)     <disk type='file' device='cdrom'>
	I0520 12:09:28.582847   68503 main.go:141] libmachine: (custom-flannel-893050)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050/boot2docker.iso'/>
	I0520 12:09:28.582856   68503 main.go:141] libmachine: (custom-flannel-893050)       <target dev='hdc' bus='scsi'/>
	I0520 12:09:28.582866   68503 main.go:141] libmachine: (custom-flannel-893050)       <readonly/>
	I0520 12:09:28.582876   68503 main.go:141] libmachine: (custom-flannel-893050)     </disk>
	I0520 12:09:28.582886   68503 main.go:141] libmachine: (custom-flannel-893050)     <disk type='file' device='disk'>
	I0520 12:09:28.582896   68503 main.go:141] libmachine: (custom-flannel-893050)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:09:28.582907   68503 main.go:141] libmachine: (custom-flannel-893050)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/custom-flannel-893050/custom-flannel-893050.rawdisk'/>
	I0520 12:09:28.582912   68503 main.go:141] libmachine: (custom-flannel-893050)       <target dev='hda' bus='virtio'/>
	I0520 12:09:28.582925   68503 main.go:141] libmachine: (custom-flannel-893050)     </disk>
	I0520 12:09:28.582930   68503 main.go:141] libmachine: (custom-flannel-893050)     <interface type='network'>
	I0520 12:09:28.582938   68503 main.go:141] libmachine: (custom-flannel-893050)       <source network='mk-custom-flannel-893050'/>
	I0520 12:09:28.582946   68503 main.go:141] libmachine: (custom-flannel-893050)       <model type='virtio'/>
	I0520 12:09:28.582951   68503 main.go:141] libmachine: (custom-flannel-893050)     </interface>
	I0520 12:09:28.582956   68503 main.go:141] libmachine: (custom-flannel-893050)     <interface type='network'>
	I0520 12:09:28.582961   68503 main.go:141] libmachine: (custom-flannel-893050)       <source network='default'/>
	I0520 12:09:28.582965   68503 main.go:141] libmachine: (custom-flannel-893050)       <model type='virtio'/>
	I0520 12:09:28.582970   68503 main.go:141] libmachine: (custom-flannel-893050)     </interface>
	I0520 12:09:28.582973   68503 main.go:141] libmachine: (custom-flannel-893050)     <serial type='pty'>
	I0520 12:09:28.582978   68503 main.go:141] libmachine: (custom-flannel-893050)       <target port='0'/>
	I0520 12:09:28.582982   68503 main.go:141] libmachine: (custom-flannel-893050)     </serial>
	I0520 12:09:28.582987   68503 main.go:141] libmachine: (custom-flannel-893050)     <console type='pty'>
	I0520 12:09:28.582992   68503 main.go:141] libmachine: (custom-flannel-893050)       <target type='serial' port='0'/>
	I0520 12:09:28.582996   68503 main.go:141] libmachine: (custom-flannel-893050)     </console>
	I0520 12:09:28.583000   68503 main.go:141] libmachine: (custom-flannel-893050)     <rng model='virtio'>
	I0520 12:09:28.583006   68503 main.go:141] libmachine: (custom-flannel-893050)       <backend model='random'>/dev/random</backend>
	I0520 12:09:28.583010   68503 main.go:141] libmachine: (custom-flannel-893050)     </rng>
	I0520 12:09:28.583014   68503 main.go:141] libmachine: (custom-flannel-893050)     
	I0520 12:09:28.583020   68503 main.go:141] libmachine: (custom-flannel-893050)     
	I0520 12:09:28.583025   68503 main.go:141] libmachine: (custom-flannel-893050)   </devices>
	I0520 12:09:28.583034   68503 main.go:141] libmachine: (custom-flannel-893050) </domain>
	I0520 12:09:28.583041   68503 main.go:141] libmachine: (custom-flannel-893050) 
	I0520 12:09:28.587678   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | domain custom-flannel-893050 has defined MAC address 52:54:00:a0:dd:7c in network default
	I0520 12:09:28.588266   68503 main.go:141] libmachine: (custom-flannel-893050) DBG | domain custom-flannel-893050 has defined MAC address 52:54:00:eb:f8:88 in network mk-custom-flannel-893050
	I0520 12:09:28.588299   68503 main.go:141] libmachine: (custom-flannel-893050) Ensuring networks are active...
	I0520 12:09:28.589117   68503 main.go:141] libmachine: (custom-flannel-893050) Ensuring network default is active
	I0520 12:09:28.589487   68503 main.go:141] libmachine: (custom-flannel-893050) Ensuring network mk-custom-flannel-893050 is active
	I0520 12:09:28.590359   68503 main.go:141] libmachine: (custom-flannel-893050) Getting domain xml...
	I0520 12:09:28.591345   68503 main.go:141] libmachine: (custom-flannel-893050) Creating domain...
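	The driver steps logged above (write the private network XML, define the domain XML, then create the domain) follow the usual libvirt define-then-start workflow. Below is a minimal standalone sketch of that workflow, assuming the libvirt.org/go/libvirt Go bindings rather than minikube's actual kvm2 driver code, and with placeholder XML instead of the full documents printed in the log:

    // Hypothetical sketch: define and start a libvirt network and domain from XML,
    // roughly mirroring the "created network xml" / "define libvirt domain using xml"
    // steps above. Not minikube's driver code; error handling kept minimal.
    package main

    import (
        "log"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Same URI as the KVMQemuURI value shown in the machine config above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Placeholders; the real XML documents are the ones printed in the log.
        networkXML := "<network>...</network>"
        domainXML := "<domain type='kvm'>...</domain>"

        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()
        if err := net.Create(); err != nil { // "Ensuring networks are active..."
            log.Fatal(err)
        }

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
            log.Fatal(err)
        }
    }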
	
	
	==> CRI-O <==
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.625522497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c538f1d-d543-4d12-a415-da63a580d8b0 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.627534392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=863d4264-ee3f-400b-9660-481ebe78ea5c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.628262905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206969628227159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=863d4264-ee3f-400b-9660-481ebe78ea5c name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.629304638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f4f85ca-07ec-42dc-b94c-5e83ed98933a name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.629377368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f4f85ca-07ec-42dc-b94c-5e83ed98933a name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.629804891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f4f85ca-07ec-42dc-b94c-5e83ed98933a name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.674082833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddf7c35c-82b4-49b4-9a73-a7edb67af117 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.674177235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddf7c35c-82b4-49b4-9a73-a7edb67af117 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.675543950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8821952e-6f75-411b-b05f-1732c9849b3b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.675926590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206969675908683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8821952e-6f75-411b-b05f-1732c9849b3b name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.676650961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eacc018b-081d-41b7-977e-e3cb636f4ca6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.676725598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eacc018b-081d-41b7-977e-e3cb636f4ca6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.677027438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eacc018b-081d-41b7-977e-e3cb636f4ca6 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.729052274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=998b8a0b-9f37-437e-87e1-fdf50c1c7156 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.729169413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=998b8a0b-9f37-437e-87e1-fdf50c1c7156 name=/runtime.v1.RuntimeService/Version
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.731896226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4143a620-fedb-4a34-832a-12952e5e2eaf name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.733070790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206969733029950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4143a620-fedb-4a34-832a-12952e5e2eaf name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.735160156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a888961-3ea0-428c-b4f7-a37bc93ea53f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.735248692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a888961-3ea0-428c-b4f7-a37bc93ea53f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.735529245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716205617133769816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696
-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Annotations:map[
string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f4774343
57bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb91
38,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a888961-3ea0-428c-b4f7-a37bc93ea53f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.742007084Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92465e82-a7de-4284-a435-6392409703ad name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.744622429Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-x92p9,Uid:241b883a-458a-4b16-99b3-03b91aec4342,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205624553139633,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:46:56.694024920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&PodSandboxMetadata{Name:busybox,Uid:6a6a1506-9e75-423a-adb9-274636a21dbc,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1716205624543673987,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:46:56.694039008Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be841c27dfa6bd421efd46cc0a67fb5bbd69e15e5d5942e7c53b2c5772037576,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-cdjmz,Uid:532bffab-0dd1-4097-b283-cb48a3010e23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205622751503120,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-cdjmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 532bffab-0dd1-4097-b283-cb48a3010e23,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20
T11:46:56.694044777Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&PodSandboxMetadata{Name:kube-proxy-7ddtk,Uid:6b50634b-ce22-460b-a740-ed81a498ce63,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205617011116876,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-ce22-460b-a740-ed81a498ce63,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-20T11:46:56.694034069Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:07fd4b29-3c5f-4437-a696-1a3ec55d5285,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205617008075092,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-05-20T11:46:56.694035622Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-668445,Uid:9d5027d9df0407cbbdbc0516584af670,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205612208086783,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.221:2379,kubernetes.io/config.hash: 9d5027d9df0407cbbdbc0516584af670,kubernetes.io/config.seen: 2024-05-20T11:46:51.744621080Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&PodSandboxMetadata{Name:
kube-apiserver-default-k8s-diff-port-668445,Uid:1cb9cc3efa4f38e9f9e5600bc29a6c58,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205612196817533,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.221:8444,kubernetes.io/config.hash: 1cb9cc3efa4f38e9f9e5600bc29a6c58,kubernetes.io/config.seen: 2024-05-20T11:46:51.691557501Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-668445,Uid:228cccb5a0212efb1b03f477434357bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205612190402389,Labels:map[s
tring]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212efb1b03f477434357bd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 228cccb5a0212efb1b03f477434357bd,kubernetes.io/config.seen: 2024-05-20T11:46:51.691558733Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-668445,Uid:7208c5b7d40f8aa3c03c1126d6cb9138,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716205612179720081,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c03c1126d6cb9138,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 7208c5b7d40f8aa3c03c1126d6cb9138,kubernetes.io/config.seen: 2024-05-20T11:46:51.691553489Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=92465e82-a7de-4284-a435-6392409703ad name=/runtime.v1.RuntimeService/ListPodSandbox
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.747729266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af3689e4-9af1-4d2e-9fbb-0c961c317f7f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.747811685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af3689e4-9af1-4d2e-9fbb-0c961c317f7f name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:09:29 default-k8s-diff-port-668445 crio[732]: time="2024-05-20 12:09:29.748200941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9,PodSandboxId:b8b38f0d9fc5983235fd01d0307ec93f50eaba147ddad8334e753f07b69511ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205647958668257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fd4b29-3c5f-4437-a696-1a3ec55d5285,},Annotations:map[string]string{io.kubernetes.container.hash: d68f6a87,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a985fe79cee321de10b093785894484dea4774ac9b5f355dd06ebe57c17827,PodSandboxId:c58ca9cb975c25d98e7acc285319cf56bc4e55366e2f9b841463232d500fe1d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716205627683143010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6a6a1506-9e75-423a-adb9-274636a21dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 58a7b55b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5,PodSandboxId:612194bb3c8cc600f93c1dca2e55ad9a08838af182bd42a3bb18e9331e126220,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205624851486132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x92p9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 241b883a-458a-4b16-99b3-03b91aec4342,},Annotations:map[string]string{io.kubernetes.container.hash: fd3a0dc0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b,PodSandboxId:e75c8115e82907cd6cb78494eb19b9fbf5d87eb0fcbe6338b492117fb5a3dc56,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716205617301716263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ddtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b50634b-c
e22-460b-a740-ed81a498ce63,},Annotations:map[string]string{io.kubernetes.container.hash: 80f9d4ea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b,PodSandboxId:af86bdd9111b866df79c4a59bae6ab8fd55cdc56840863a420bcd04e0f2e85c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205612483128514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d5027d9df0407cbbdbc0516584af670,},Ann
otations:map[string]string{io.kubernetes.container.hash: c7679dfc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0,PodSandboxId:e23f1ec5baab95f7e522cabf1bad9a571a492375491479b2721fa3311f9ae442,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205612469143347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cb9cc3efa4f38e9f9e5600bc29a6c58,},Annot
ations:map[string]string{io.kubernetes.container.hash: 88a7cee6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2,PodSandboxId:d8a3ed4cefcf516b0231aa18d63f5f2cfde7c20a2f13e9c868287f24db1d8d48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205612410212165,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 228cccb5a0212ef
b1b03f477434357bd,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d,PodSandboxId:f88fbb0be85d45058ceed82bb421d053483d6bb29b599d4c1f953393421ff13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716205612354696245,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-668445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7208c5b7d40f8aa3c
03c1126d6cb9138,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af3689e4-9af1-4d2e-9fbb-0c961c317f7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5555dedde072c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   b8b38f0d9fc59       storage-provisioner
	66a985fe79cee       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   c58ca9cb975c2       busybox
	3fc749b082254       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   612194bb3c8cc       coredns-7db6d8ff4d-x92p9
	03a8b6ec44d3a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      22 minutes ago      Running             kube-proxy                1                   e75c8115e8290       kube-proxy-7ddtk
	254a32d2ee69b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   b8b38f0d9fc59       storage-provisioner
	82682648d9acf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   af86bdd9111b8       etcd-default-k8s-diff-port-668445
	c254befba5c81       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      22 minutes ago      Running             kube-apiserver            1                   e23f1ec5baab9       kube-apiserver-default-k8s-diff-port-668445
	78120ead890c0       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      22 minutes ago      Running             kube-controller-manager   1                   d8a3ed4cefcf5       kube-controller-manager-default-k8s-diff-port-668445
	7d443d30c89f0       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      22 minutes ago      Running             kube-scheduler            1                   f88fbb0be85d4       kube-scheduler-default-k8s-diff-port-668445
	
	
	==> coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48005 - 25047 "HINFO IN 4496369672220611389.2769398303541995984. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011848464s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-668445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-668445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=default-k8s-diff-port-668445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_40_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:40:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-668445
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:09:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:07:50 +0000   Mon, 20 May 2024 11:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:07:50 +0000   Mon, 20 May 2024 11:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:07:50 +0000   Mon, 20 May 2024 11:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:07:50 +0000   Mon, 20 May 2024 11:47:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.221
	  Hostname:    default-k8s-diff-port-668445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1236b9c875b407ebb2860324b3b9a70
	  System UUID:                f1236b9c-875b-407e-bb28-60324b3b9a70
	  Boot ID:                    1caa0d46-e61a-473b-8b88-a9e0961ee5d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 coredns-7db6d8ff4d-x92p9                                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     28m
	  kube-system                 etcd-default-k8s-diff-port-668445                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-668445             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-668445    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 kube-proxy-7ddtk                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-668445             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         29m
	  kube-system                 metrics-server-569cc877fc-cdjmz                         100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         28m
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-668445 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-668445 event: Registered Node default-k8s-diff-port-668445 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-668445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-668445 event: Registered Node default-k8s-diff-port-668445 in Controller
	
	
	==> dmesg <==
	[May20 11:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055883] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043727] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.682305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.497684] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643593] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.586986] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.056080] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070230] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.196498] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.156427] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.305624] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.546332] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.063986] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.834246] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +5.585730] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.456168] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[May20 11:47] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.448553] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] <==
	{"level":"info","ts":"2024-05-20T11:46:54.637109Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T11:46:54.637269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:46:54.639668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.221:2379"}
	{"level":"info","ts":"2024-05-20T11:46:54.639689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:47:08.030214Z","caller":"traceutil/trace.go:171","msg":"trace[1386647512] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:621; }","duration":"163.907753ms","start":"2024-05-20T11:47:07.866256Z","end":"2024-05-20T11:47:08.030164Z","steps":["trace[1386647512] 'read index received'  (duration: 163.660935ms)","trace[1386647512] 'applied index is now lower than readState.Index'  (duration: 246.108µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T11:47:08.030654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.238378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-668445\" ","response":"range_response_count:1 size:5536"}
	{"level":"info","ts":"2024-05-20T11:47:08.030767Z","caller":"traceutil/trace.go:171","msg":"trace[976766791] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-668445; range_end:; response_count:1; response_revision:580; }","duration":"164.533741ms","start":"2024-05-20T11:47:07.866218Z","end":"2024-05-20T11:47:08.030752Z","steps":["trace[976766791] 'agreement among raft nodes before linearized reading'  (duration: 164.185489ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:47:08.030821Z","caller":"traceutil/trace.go:171","msg":"trace[428608829] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"170.470419ms","start":"2024-05-20T11:47:07.860327Z","end":"2024-05-20T11:47:08.030798Z","steps":["trace[428608829] 'process raft request'  (duration: 169.723152ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:47:11.546033Z","caller":"traceutil/trace.go:171","msg":"trace[749986633] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"123.151428ms","start":"2024-05-20T11:47:11.422478Z","end":"2024-05-20T11:47:11.54563Z","steps":["trace[749986633] 'process raft request'  (duration: 122.79367ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-20T11:56:54.666999Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2024-05-20T11:56:54.678177Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":832,"took":"10.846272ms","hash":1978745169,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-05-20T11:56:54.678238Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1978745169,"revision":832,"compact-revision":-1}
	{"level":"info","ts":"2024-05-20T12:01:54.675414Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1075}
	{"level":"info","ts":"2024-05-20T12:01:54.691463Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1075,"took":"15.689423ms","hash":2016420238,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-20T12:01:54.691508Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2016420238,"revision":1075,"compact-revision":832}
	{"level":"info","ts":"2024-05-20T12:06:54.683636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1317}
	{"level":"info","ts":"2024-05-20T12:06:54.688155Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1317,"took":"3.719772ms","hash":3254350129,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-20T12:06:54.688235Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3254350129,"revision":1317,"compact-revision":1075}
	{"level":"warn","ts":"2024-05-20T12:07:07.788171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.349611ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10307771520804478549 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.221\" mod_revision:1564 > success:<request_put:<key:\"/registry/masterleases/192.168.61.221\" value_size:67 lease:1084399483949702737 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.221\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T12:07:07.788459Z","caller":"traceutil/trace.go:171","msg":"trace[298270596] transaction","detail":"{read_only:false; response_revision:1572; number_of_response:1; }","duration":"257.093398ms","start":"2024-05-20T12:07:07.531315Z","end":"2024-05-20T12:07:07.788408Z","steps":["trace[298270596] 'process raft request'  (duration: 123.977449ms)","trace[298270596] 'compare'  (duration: 131.095857ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-20T12:07:11.059584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-20T12:07:10.745405Z","time spent":"314.164199ms","remote":"127.0.0.1:46378","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-05-20T12:07:32.890904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.948499ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10307771520804478678 > lease_revoke:<id:0f0c8f95d3705e83>","response":"size:27"}
	{"level":"info","ts":"2024-05-20T12:09:06.245769Z","caller":"traceutil/trace.go:171","msg":"trace[1781006613] transaction","detail":"{read_only:false; response_revision:1670; number_of_response:1; }","duration":"127.064082ms","start":"2024-05-20T12:09:06.118664Z","end":"2024-05-20T12:09:06.245728Z","steps":["trace[1781006613] 'process raft request'  (duration: 126.973404ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-20T12:09:07.754513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.268982ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10307771520804479136 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.221\" mod_revision:1663 > success:<request_put:<key:\"/registry/masterleases/192.168.61.221\" value_size:68 lease:1084399483949703326 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.221\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-20T12:09:07.754712Z","caller":"traceutil/trace.go:171","msg":"trace[834166168] transaction","detail":"{read_only:false; response_revision:1671; number_of_response:1; }","duration":"259.980404ms","start":"2024-05-20T12:09:07.494704Z","end":"2024-05-20T12:09:07.754684Z","steps":["trace[834166168] 'process raft request'  (duration: 64.388922ms)","trace[834166168] 'compare'  (duration: 195.122324ms)"],"step_count":2}
	
	
	==> kernel <==
	 12:09:30 up 23 min,  0 users,  load average: 0.18, 0.13, 0.10
	Linux default-k8s-diff-port-668445 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] <==
	I0520 12:02:57.077246       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:04:57.075636       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:04:57.075757       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:04:57.075765       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:04:57.077898       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:04:57.078034       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:04:57.078062       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:06:56.077041       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:06:56.077140       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0520 12:06:57.078190       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:06:57.078294       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:06:57.078306       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:06:57.078333       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:06:57.078401       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:06:57.079679       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:07:57.078562       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:07:57.078891       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:07:57.078926       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:07:57.080358       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:07:57.080456       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:07:57.080465       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] <==
	I0520 12:03:39.881758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:04:09.377779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:04:09.892149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:04:39.382719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:04:39.899248       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:05:09.388344       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:05:09.907514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:05:39.394574       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:05:39.915689       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:06:09.400348       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:06:09.924423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:06:39.405901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:06:39.932481       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:07:09.413759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:07:09.942454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:07:39.419014       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:07:39.951614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:08:09.425510       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:08:09.962190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 12:08:20.760403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="227.908µs"
	I0520 12:08:33.762240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="68.585µs"
	E0520 12:08:39.431249       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:08:39.970346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:09:09.436071       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:09:09.977382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] <==
	I0520 11:46:57.458834       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:46:57.472782       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.221"]
	I0520 11:46:57.508884       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:46:57.508922       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:46:57.508985       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:46:57.511590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:46:57.511808       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:46:57.511839       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:57.513082       1 config.go:192] "Starting service config controller"
	I0520 11:46:57.513125       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:46:57.513148       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:46:57.513169       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:46:57.513669       1 config.go:319] "Starting node config controller"
	I0520 11:46:57.513696       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:46:57.613737       1 shared_informer.go:320] Caches are synced for node config
	I0520 11:46:57.613780       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:46:57.613879       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] <==
	I0520 11:46:53.835180       1 serving.go:380] Generated self-signed cert in-memory
	W0520 11:46:56.019468       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:46:56.019565       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:46:56.019575       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:46:56.019580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:46:56.070776       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0520 11:46:56.070878       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:46:56.074681       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0520 11:46:56.074778       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:46:56.075416       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0520 11:46:56.075530       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0520 11:46:56.175578       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:06:51 default-k8s-diff-port-668445 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:06:59 default-k8s-diff-port-668445 kubelet[944]: E0520 12:06:59.738069     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:07:10 default-k8s-diff-port-668445 kubelet[944]: E0520 12:07:10.739208     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:07:22 default-k8s-diff-port-668445 kubelet[944]: E0520 12:07:22.737789     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:07:35 default-k8s-diff-port-668445 kubelet[944]: E0520 12:07:35.738050     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:07:50 default-k8s-diff-port-668445 kubelet[944]: E0520 12:07:50.737433     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:07:51 default-k8s-diff-port-668445 kubelet[944]: E0520 12:07:51.769688     944 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:07:51 default-k8s-diff-port-668445 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:07:51 default-k8s-diff-port-668445 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:07:51 default-k8s-diff-port-668445 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:07:51 default-k8s-diff-port-668445 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:08:05 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:05.771414     944 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 20 12:08:05 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:05.771743     944 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	May 20 12:08:05 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:05.772153     944 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llfvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-cdjmz_kube-system(532bffab-0dd1-4097-b283-cb48a3010e23): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	May 20 12:08:05 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:05.772309     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:08:20 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:20.736877     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:08:33 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:33.740082     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:08:48 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:48.737024     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:08:51 default-k8s-diff-port-668445 kubelet[944]: E0520 12:08:51.770507     944 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:08:51 default-k8s-diff-port-668445 kubelet[944]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:08:51 default-k8s-diff-port-668445 kubelet[944]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:08:51 default-k8s-diff-port-668445 kubelet[944]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:08:51 default-k8s-diff-port-668445 kubelet[944]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:09:03 default-k8s-diff-port-668445 kubelet[944]: E0520 12:09:03.738987     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	May 20 12:09:18 default-k8s-diff-port-668445 kubelet[944]: E0520 12:09:18.738219     944 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cdjmz" podUID="532bffab-0dd1-4097-b283-cb48a3010e23"
	
	
	==> storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] <==
	I0520 11:46:57.332090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 11:47:27.336345       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] <==
	I0520 11:47:28.073533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:47:28.084926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:47:28.085372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:47:45.489018       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:47:45.489175       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-668445_687eb3ed-7683-441d-b908-0ab449304a7a!
	I0520 11:47:45.490212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8226dea9-06fa-4f0d-9e1d-8213276187a6", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-668445_687eb3ed-7683-441d-b908-0ab449304a7a became leader
	I0520 11:47:45.590132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-668445_687eb3ed-7683-441d-b908-0ab449304a7a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-cdjmz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 describe pod metrics-server-569cc877fc-cdjmz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-668445 describe pod metrics-server-569cc877fc-cdjmz: exit status 1 (98.867772ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-cdjmz" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-668445 describe pod metrics-server-569cc877fc-cdjmz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.84s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (322.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-814599 -n no-preload-814599
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-20 12:06:57.960325571 +0000 UTC m=+6387.672208404
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-814599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-814599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.268µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-814599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-814599 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-814599 logs -n 25: (1.241610566s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 12:06 UTC | 20 May 24 12:06 UTC |
	| start   | -p newest-cni-111176 --memory=2200 --alsologtostderr   | newest-cni-111176            | jenkins | v1.33.1 | 20 May 24 12:06 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 12:06:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 12:06:39.349115   64874 out.go:291] Setting OutFile to fd 1 ...
	I0520 12:06:39.349377   64874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:06:39.349387   64874 out.go:304] Setting ErrFile to fd 2...
	I0520 12:06:39.349399   64874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 12:06:39.349549   64874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 12:06:39.350107   64874 out.go:298] Setting JSON to false
	I0520 12:06:39.351022   64874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6542,"bootTime":1716200257,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 12:06:39.351085   64874 start.go:139] virtualization: kvm guest
	I0520 12:06:39.353817   64874 out.go:177] * [newest-cni-111176] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 12:06:39.355350   64874 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 12:06:39.355410   64874 notify.go:220] Checking for updates...
	I0520 12:06:39.356649   64874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 12:06:39.358005   64874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 12:06:39.359375   64874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:06:39.360503   64874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 12:06:39.361852   64874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 12:06:39.363764   64874 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:06:39.363913   64874 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:06:39.364052   64874 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 12:06:39.364138   64874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 12:06:39.402281   64874 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 12:06:39.403659   64874 start.go:297] selected driver: kvm2
	I0520 12:06:39.403672   64874 start.go:901] validating driver "kvm2" against <nil>
	I0520 12:06:39.403683   64874 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 12:06:39.404335   64874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:06:39.404411   64874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 12:06:39.419060   64874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 12:06:39.419102   64874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0520 12:06:39.419123   64874 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0520 12:06:39.419315   64874 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0520 12:06:39.419337   64874 cni.go:84] Creating CNI manager for ""
	I0520 12:06:39.419344   64874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 12:06:39.419351   64874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 12:06:39.419392   64874 start.go:340] cluster config:
	{Name:newest-cni-111176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-111176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 12:06:39.419472   64874 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 12:06:39.421580   64874 out.go:177] * Starting "newest-cni-111176" primary control-plane node in "newest-cni-111176" cluster
	I0520 12:06:39.422607   64874 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 12:06:39.422657   64874 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 12:06:39.422667   64874 cache.go:56] Caching tarball of preloaded images
	I0520 12:06:39.422740   64874 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 12:06:39.422750   64874 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 12:06:39.422831   64874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/newest-cni-111176/config.json ...
	I0520 12:06:39.422846   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/newest-cni-111176/config.json: {Name:mkbf82d3e0b24b1e646784b03f44e760d3ca4305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 12:06:39.422971   64874 start.go:360] acquireMachinesLock for newest-cni-111176: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 12:06:39.423002   64874 start.go:364] duration metric: took 17.243µs to acquireMachinesLock for "newest-cni-111176"
	I0520 12:06:39.423023   64874 start.go:93] Provisioning new machine with config: &{Name:newest-cni-111176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:newest-cni-111176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 12:06:39.423080   64874 start.go:125] createHost starting for "" (driver="kvm2")
	I0520 12:06:39.424627   64874 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0520 12:06:39.424743   64874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 12:06:39.424786   64874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 12:06:39.439755   64874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I0520 12:06:39.440182   64874 main.go:141] libmachine: () Calling .GetVersion
	I0520 12:06:39.440736   64874 main.go:141] libmachine: Using API Version  1
	I0520 12:06:39.440756   64874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 12:06:39.441088   64874 main.go:141] libmachine: () Calling .GetMachineName
	I0520 12:06:39.441281   64874 main.go:141] libmachine: (newest-cni-111176) Calling .GetMachineName
	I0520 12:06:39.441449   64874 main.go:141] libmachine: (newest-cni-111176) Calling .DriverName
	I0520 12:06:39.441615   64874 start.go:159] libmachine.API.Create for "newest-cni-111176" (driver="kvm2")
	I0520 12:06:39.441645   64874 client.go:168] LocalClient.Create starting
	I0520 12:06:39.441676   64874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem
	I0520 12:06:39.441709   64874 main.go:141] libmachine: Decoding PEM data...
	I0520 12:06:39.441730   64874 main.go:141] libmachine: Parsing certificate...
	I0520 12:06:39.441795   64874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem
	I0520 12:06:39.441826   64874 main.go:141] libmachine: Decoding PEM data...
	I0520 12:06:39.441848   64874 main.go:141] libmachine: Parsing certificate...
	I0520 12:06:39.441876   64874 main.go:141] libmachine: Running pre-create checks...
	I0520 12:06:39.441900   64874 main.go:141] libmachine: (newest-cni-111176) Calling .PreCreateCheck
	I0520 12:06:39.442294   64874 main.go:141] libmachine: (newest-cni-111176) Calling .GetConfigRaw
	I0520 12:06:39.442635   64874 main.go:141] libmachine: Creating machine...
	I0520 12:06:39.442648   64874 main.go:141] libmachine: (newest-cni-111176) Calling .Create
	I0520 12:06:39.442758   64874 main.go:141] libmachine: (newest-cni-111176) Creating KVM machine...
	I0520 12:06:39.443917   64874 main.go:141] libmachine: (newest-cni-111176) DBG | found existing default KVM network
	I0520 12:06:39.445002   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.444839   64897 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:37:ae} reservation:<nil>}
	I0520 12:06:39.445876   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.445782   64897 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d6:3c:1b} reservation:<nil>}
	I0520 12:06:39.446820   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.446715   64897 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f1:08:22} reservation:<nil>}
	I0520 12:06:39.447825   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.447753   64897 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289890}
	I0520 12:06:39.447876   64874 main.go:141] libmachine: (newest-cni-111176) DBG | created network xml: 
	I0520 12:06:39.447885   64874 main.go:141] libmachine: (newest-cni-111176) DBG | <network>
	I0520 12:06:39.447895   64874 main.go:141] libmachine: (newest-cni-111176) DBG |   <name>mk-newest-cni-111176</name>
	I0520 12:06:39.447904   64874 main.go:141] libmachine: (newest-cni-111176) DBG |   <dns enable='no'/>
	I0520 12:06:39.447911   64874 main.go:141] libmachine: (newest-cni-111176) DBG |   
	I0520 12:06:39.447925   64874 main.go:141] libmachine: (newest-cni-111176) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0520 12:06:39.447946   64874 main.go:141] libmachine: (newest-cni-111176) DBG |     <dhcp>
	I0520 12:06:39.447957   64874 main.go:141] libmachine: (newest-cni-111176) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0520 12:06:39.447966   64874 main.go:141] libmachine: (newest-cni-111176) DBG |     </dhcp>
	I0520 12:06:39.448030   64874 main.go:141] libmachine: (newest-cni-111176) DBG |   </ip>
	I0520 12:06:39.448050   64874 main.go:141] libmachine: (newest-cni-111176) DBG |   
	I0520 12:06:39.448056   64874 main.go:141] libmachine: (newest-cni-111176) DBG | </network>
	I0520 12:06:39.448064   64874 main.go:141] libmachine: (newest-cni-111176) DBG | 
	I0520 12:06:39.453369   64874 main.go:141] libmachine: (newest-cni-111176) DBG | trying to create private KVM network mk-newest-cni-111176 192.168.72.0/24...
	I0520 12:06:39.521641   64874 main.go:141] libmachine: (newest-cni-111176) DBG | private KVM network mk-newest-cni-111176 192.168.72.0/24 created
	I0520 12:06:39.521672   64874 main.go:141] libmachine: (newest-cni-111176) Setting up store path in /home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176 ...
	I0520 12:06:39.521699   64874 main.go:141] libmachine: (newest-cni-111176) Building disk image from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 12:06:39.521739   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.521667   64897 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:06:39.521905   64874 main.go:141] libmachine: (newest-cni-111176) Downloading /home/jenkins/minikube-integration/18925-3819/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso...
	I0520 12:06:39.740777   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.740637   64897 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176/id_rsa...
	I0520 12:06:39.835333   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.835227   64897 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176/newest-cni-111176.rawdisk...
	I0520 12:06:39.835363   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Writing magic tar header
	I0520 12:06:39.835380   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Writing SSH key tar header
	I0520 12:06:39.835393   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:39.835358   64897 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176 ...
	I0520 12:06:39.835536   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176
	I0520 12:06:39.835565   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube/machines
	I0520 12:06:39.835580   64874 main.go:141] libmachine: (newest-cni-111176) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176 (perms=drwx------)
	I0520 12:06:39.835595   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 12:06:39.835608   64874 main.go:141] libmachine: (newest-cni-111176) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube/machines (perms=drwxr-xr-x)
	I0520 12:06:39.835633   64874 main.go:141] libmachine: (newest-cni-111176) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819/.minikube (perms=drwxr-xr-x)
	I0520 12:06:39.835648   64874 main.go:141] libmachine: (newest-cni-111176) Setting executable bit set on /home/jenkins/minikube-integration/18925-3819 (perms=drwxrwxr-x)
	I0520 12:06:39.835659   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18925-3819
	I0520 12:06:39.835685   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0520 12:06:39.835702   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home/jenkins
	I0520 12:06:39.835715   64874 main.go:141] libmachine: (newest-cni-111176) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0520 12:06:39.835730   64874 main.go:141] libmachine: (newest-cni-111176) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0520 12:06:39.835741   64874 main.go:141] libmachine: (newest-cni-111176) Creating domain...
	I0520 12:06:39.835756   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Checking permissions on dir: /home
	I0520 12:06:39.835767   64874 main.go:141] libmachine: (newest-cni-111176) DBG | Skipping /home - not owner
	I0520 12:06:39.836885   64874 main.go:141] libmachine: (newest-cni-111176) define libvirt domain using xml: 
	I0520 12:06:39.836909   64874 main.go:141] libmachine: (newest-cni-111176) <domain type='kvm'>
	I0520 12:06:39.836920   64874 main.go:141] libmachine: (newest-cni-111176)   <name>newest-cni-111176</name>
	I0520 12:06:39.836944   64874 main.go:141] libmachine: (newest-cni-111176)   <memory unit='MiB'>2200</memory>
	I0520 12:06:39.836956   64874 main.go:141] libmachine: (newest-cni-111176)   <vcpu>2</vcpu>
	I0520 12:06:39.836962   64874 main.go:141] libmachine: (newest-cni-111176)   <features>
	I0520 12:06:39.836973   64874 main.go:141] libmachine: (newest-cni-111176)     <acpi/>
	I0520 12:06:39.836981   64874 main.go:141] libmachine: (newest-cni-111176)     <apic/>
	I0520 12:06:39.836986   64874 main.go:141] libmachine: (newest-cni-111176)     <pae/>
	I0520 12:06:39.836993   64874 main.go:141] libmachine: (newest-cni-111176)     
	I0520 12:06:39.837001   64874 main.go:141] libmachine: (newest-cni-111176)   </features>
	I0520 12:06:39.837013   64874 main.go:141] libmachine: (newest-cni-111176)   <cpu mode='host-passthrough'>
	I0520 12:06:39.837030   64874 main.go:141] libmachine: (newest-cni-111176)   
	I0520 12:06:39.837045   64874 main.go:141] libmachine: (newest-cni-111176)   </cpu>
	I0520 12:06:39.837050   64874 main.go:141] libmachine: (newest-cni-111176)   <os>
	I0520 12:06:39.837055   64874 main.go:141] libmachine: (newest-cni-111176)     <type>hvm</type>
	I0520 12:06:39.837060   64874 main.go:141] libmachine: (newest-cni-111176)     <boot dev='cdrom'/>
	I0520 12:06:39.837065   64874 main.go:141] libmachine: (newest-cni-111176)     <boot dev='hd'/>
	I0520 12:06:39.837070   64874 main.go:141] libmachine: (newest-cni-111176)     <bootmenu enable='no'/>
	I0520 12:06:39.837080   64874 main.go:141] libmachine: (newest-cni-111176)   </os>
	I0520 12:06:39.837088   64874 main.go:141] libmachine: (newest-cni-111176)   <devices>
	I0520 12:06:39.837100   64874 main.go:141] libmachine: (newest-cni-111176)     <disk type='file' device='cdrom'>
	I0520 12:06:39.837130   64874 main.go:141] libmachine: (newest-cni-111176)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176/boot2docker.iso'/>
	I0520 12:06:39.837157   64874 main.go:141] libmachine: (newest-cni-111176)       <target dev='hdc' bus='scsi'/>
	I0520 12:06:39.837173   64874 main.go:141] libmachine: (newest-cni-111176)       <readonly/>
	I0520 12:06:39.837190   64874 main.go:141] libmachine: (newest-cni-111176)     </disk>
	I0520 12:06:39.837212   64874 main.go:141] libmachine: (newest-cni-111176)     <disk type='file' device='disk'>
	I0520 12:06:39.837223   64874 main.go:141] libmachine: (newest-cni-111176)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0520 12:06:39.837233   64874 main.go:141] libmachine: (newest-cni-111176)       <source file='/home/jenkins/minikube-integration/18925-3819/.minikube/machines/newest-cni-111176/newest-cni-111176.rawdisk'/>
	I0520 12:06:39.837247   64874 main.go:141] libmachine: (newest-cni-111176)       <target dev='hda' bus='virtio'/>
	I0520 12:06:39.837259   64874 main.go:141] libmachine: (newest-cni-111176)     </disk>
	I0520 12:06:39.837269   64874 main.go:141] libmachine: (newest-cni-111176)     <interface type='network'>
	I0520 12:06:39.837283   64874 main.go:141] libmachine: (newest-cni-111176)       <source network='mk-newest-cni-111176'/>
	I0520 12:06:39.837293   64874 main.go:141] libmachine: (newest-cni-111176)       <model type='virtio'/>
	I0520 12:06:39.837306   64874 main.go:141] libmachine: (newest-cni-111176)     </interface>
	I0520 12:06:39.837316   64874 main.go:141] libmachine: (newest-cni-111176)     <interface type='network'>
	I0520 12:06:39.837329   64874 main.go:141] libmachine: (newest-cni-111176)       <source network='default'/>
	I0520 12:06:39.837340   64874 main.go:141] libmachine: (newest-cni-111176)       <model type='virtio'/>
	I0520 12:06:39.837353   64874 main.go:141] libmachine: (newest-cni-111176)     </interface>
	I0520 12:06:39.837360   64874 main.go:141] libmachine: (newest-cni-111176)     <serial type='pty'>
	I0520 12:06:39.837371   64874 main.go:141] libmachine: (newest-cni-111176)       <target port='0'/>
	I0520 12:06:39.837379   64874 main.go:141] libmachine: (newest-cni-111176)     </serial>
	I0520 12:06:39.837390   64874 main.go:141] libmachine: (newest-cni-111176)     <console type='pty'>
	I0520 12:06:39.837399   64874 main.go:141] libmachine: (newest-cni-111176)       <target type='serial' port='0'/>
	I0520 12:06:39.837414   64874 main.go:141] libmachine: (newest-cni-111176)     </console>
	I0520 12:06:39.837432   64874 main.go:141] libmachine: (newest-cni-111176)     <rng model='virtio'>
	I0520 12:06:39.837445   64874 main.go:141] libmachine: (newest-cni-111176)       <backend model='random'>/dev/random</backend>
	I0520 12:06:39.837455   64874 main.go:141] libmachine: (newest-cni-111176)     </rng>
	I0520 12:06:39.837461   64874 main.go:141] libmachine: (newest-cni-111176)     
	I0520 12:06:39.837466   64874 main.go:141] libmachine: (newest-cni-111176)     
	I0520 12:06:39.837472   64874 main.go:141] libmachine: (newest-cni-111176)   </devices>
	I0520 12:06:39.837478   64874 main.go:141] libmachine: (newest-cni-111176) </domain>
	I0520 12:06:39.837496   64874 main.go:141] libmachine: (newest-cni-111176) 
	I0520 12:06:39.841738   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:f9:c5:1d in network default
	I0520 12:06:39.842401   64874 main.go:141] libmachine: (newest-cni-111176) Ensuring networks are active...
	I0520 12:06:39.842426   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:39.843066   64874 main.go:141] libmachine: (newest-cni-111176) Ensuring network default is active
	I0520 12:06:39.843360   64874 main.go:141] libmachine: (newest-cni-111176) Ensuring network mk-newest-cni-111176 is active
	I0520 12:06:39.843836   64874 main.go:141] libmachine: (newest-cni-111176) Getting domain xml...
	I0520 12:06:39.844417   64874 main.go:141] libmachine: (newest-cni-111176) Creating domain...
	I0520 12:06:41.092992   64874 main.go:141] libmachine: (newest-cni-111176) Waiting to get IP...
	I0520 12:06:41.093788   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:41.094222   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:41.094250   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:41.094189   64897 retry.go:31] will retry after 207.201784ms: waiting for machine to come up
	I0520 12:06:41.302600   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:41.303096   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:41.303124   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:41.303042   64897 retry.go:31] will retry after 279.118848ms: waiting for machine to come up
	I0520 12:06:41.584327   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:41.584786   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:41.584819   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:41.584730   64897 retry.go:31] will retry after 299.859638ms: waiting for machine to come up
	I0520 12:06:41.886314   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:41.886869   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:41.886895   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:41.886825   64897 retry.go:31] will retry after 583.173685ms: waiting for machine to come up
	I0520 12:06:42.471651   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:42.472093   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:42.472123   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:42.472040   64897 retry.go:31] will retry after 589.073479ms: waiting for machine to come up
	I0520 12:06:43.062694   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:43.063163   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:43.063202   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:43.063089   64897 retry.go:31] will retry after 851.625717ms: waiting for machine to come up
	I0520 12:06:43.916596   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:43.917004   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:43.917030   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:43.916965   64897 retry.go:31] will retry after 929.582358ms: waiting for machine to come up
	I0520 12:06:44.848551   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:44.849015   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:44.849039   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:44.848978   64897 retry.go:31] will retry after 1.352936906s: waiting for machine to come up
	I0520 12:06:46.204159   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:46.204660   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:46.204687   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:46.204615   64897 retry.go:31] will retry after 1.293611509s: waiting for machine to come up
	I0520 12:06:47.500038   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:47.500601   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:47.500625   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:47.500551   64897 retry.go:31] will retry after 2.234848391s: waiting for machine to come up
	I0520 12:06:49.736694   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:49.737181   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:49.737229   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:49.737140   64897 retry.go:31] will retry after 2.546227858s: waiting for machine to come up
	I0520 12:06:52.285945   64874 main.go:141] libmachine: (newest-cni-111176) DBG | domain newest-cni-111176 has defined MAC address 52:54:00:b1:8e:02 in network mk-newest-cni-111176
	I0520 12:06:52.286347   64874 main.go:141] libmachine: (newest-cni-111176) DBG | unable to find current IP address of domain newest-cni-111176 in network mk-newest-cni-111176
	I0520 12:06:52.286379   64874 main.go:141] libmachine: (newest-cni-111176) DBG | I0520 12:06:52.286282   64897 retry.go:31] will retry after 3.292994535s: waiting for machine to come up
	
	
	==> CRI-O <==
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.573706785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206818573684913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf251306-a064-4476-bde3-021aa0945466 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.577884931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5191814-e4b2-4912-9c6d-e8646113f5f5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.577963618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5191814-e4b2-4912-9c6d-e8646113f5f5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.579014938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,PodSandboxId:3bc1c76e2967b9fa8de44519b1eb9cbfc78eb7d1ccc5bd98c4c16de3273f4782,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205950639599060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,PodSandboxId:186a07c2b9dcefe382bbd27f96a750adcf526c549de07e8445fd7c753f204575,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950189807669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,PodSandboxId:c1584e5ace282fb23966c7e3abd4d2284ebe8fa48b6a0e4fb41ce76fdadb66e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950088941153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,PodSandboxId:454543b0cb4cfce5dd4d8d64d5e7412adb9adb38ce89f77793c9ef74eddfcf04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1716205949561791299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,PodSandboxId:a935507c5eb9bba1ea795c3b839e245fb248b37c27d2d057004a28cd3faac6e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205929646435708,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,PodSandboxId:33d15f48f264c9fbf0c963787b156cec2212ef022f8bec3843226374633c9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:171620592967630
4710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,PodSandboxId:7425c02584ccb23e557f28ec2dd86e595082d99cec925fcd8f3fe8fbb927bd33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205929615674493,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,PodSandboxId:5cceeab3a53cb296769746e8be114b3b7bba322bc8768756752cc5f9a4e3fcc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205929599029501,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5191814-e4b2-4912-9c6d-e8646113f5f5 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.630184760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06859852-718f-4dc4-b97b-7e1a9de7e685 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.630316522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06859852-718f-4dc4-b97b-7e1a9de7e685 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.632085150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2382c7cf-0242-4113-9276-caccaf3e689d name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.632515246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206818632489141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2382c7cf-0242-4113-9276-caccaf3e689d name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.633199625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8636dcf-71f4-4bf7-869f-f4877ce7157d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.633277320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8636dcf-71f4-4bf7-869f-f4877ce7157d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.633468382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,PodSandboxId:3bc1c76e2967b9fa8de44519b1eb9cbfc78eb7d1ccc5bd98c4c16de3273f4782,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205950639599060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,PodSandboxId:186a07c2b9dcefe382bbd27f96a750adcf526c549de07e8445fd7c753f204575,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950189807669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,PodSandboxId:c1584e5ace282fb23966c7e3abd4d2284ebe8fa48b6a0e4fb41ce76fdadb66e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950088941153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,PodSandboxId:454543b0cb4cfce5dd4d8d64d5e7412adb9adb38ce89f77793c9ef74eddfcf04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1716205949561791299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,PodSandboxId:a935507c5eb9bba1ea795c3b839e245fb248b37c27d2d057004a28cd3faac6e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205929646435708,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,PodSandboxId:33d15f48f264c9fbf0c963787b156cec2212ef022f8bec3843226374633c9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:171620592967630
4710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,PodSandboxId:7425c02584ccb23e557f28ec2dd86e595082d99cec925fcd8f3fe8fbb927bd33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205929615674493,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,PodSandboxId:5cceeab3a53cb296769746e8be114b3b7bba322bc8768756752cc5f9a4e3fcc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205929599029501,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8636dcf-71f4-4bf7-869f-f4877ce7157d name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.678940721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7a664e9-85f3-4d96-b7af-8e5543dc2254 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.679041456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7a664e9-85f3-4d96-b7af-8e5543dc2254 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.681013943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48ae24a5-bd57-47ab-9aea-ad975e478098 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.681709116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206818681681774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48ae24a5-bd57-47ab-9aea-ad975e478098 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.682366100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aee3819d-c6b1-4c93-ac54-3973ba98efc2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.682426165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aee3819d-c6b1-4c93-ac54-3973ba98efc2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.682865636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,PodSandboxId:3bc1c76e2967b9fa8de44519b1eb9cbfc78eb7d1ccc5bd98c4c16de3273f4782,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205950639599060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,PodSandboxId:186a07c2b9dcefe382bbd27f96a750adcf526c549de07e8445fd7c753f204575,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950189807669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,PodSandboxId:c1584e5ace282fb23966c7e3abd4d2284ebe8fa48b6a0e4fb41ce76fdadb66e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950088941153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,PodSandboxId:454543b0cb4cfce5dd4d8d64d5e7412adb9adb38ce89f77793c9ef74eddfcf04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1716205949561791299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,PodSandboxId:a935507c5eb9bba1ea795c3b839e245fb248b37c27d2d057004a28cd3faac6e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205929646435708,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,PodSandboxId:33d15f48f264c9fbf0c963787b156cec2212ef022f8bec3843226374633c9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:171620592967630
4710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,PodSandboxId:7425c02584ccb23e557f28ec2dd86e595082d99cec925fcd8f3fe8fbb927bd33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205929615674493,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,PodSandboxId:5cceeab3a53cb296769746e8be114b3b7bba322bc8768756752cc5f9a4e3fcc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205929599029501,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aee3819d-c6b1-4c93-ac54-3973ba98efc2 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.715594801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d57949d0-ff7e-4faa-8f6d-f43228fb4919 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.715668910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d57949d0-ff7e-4faa-8f6d-f43228fb4919 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.717263953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eb12b8f-9d2e-4491-a5e9-b5f3c91bbfbd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.717600275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206818717578163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eb12b8f-9d2e-4491-a5e9-b5f3c91bbfbd name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.718413954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e00e2038-7513-43a2-ab35-b028d8886544 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.718468319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e00e2038-7513-43a2-ab35-b028d8886544 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:58 no-preload-814599 crio[725]: time="2024-05-20 12:06:58.718637243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748,PodSandboxId:3bc1c76e2967b9fa8de44519b1eb9cbfc78eb7d1ccc5bd98c4c16de3273f4782,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716205950639599060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b,},Annotations:map[string]string{io.kubernetes.container.hash: 753fd492,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29,PodSandboxId:186a07c2b9dcefe382bbd27f96a750adcf526c549de07e8445fd7c753f204575,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950189807669,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8qdbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98bbe29a-5bd1-4568-ac29-5bf030c38f79,},Annotations:map[string]string{io.kubernetes.container.hash: dc33eadc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b,PodSandboxId:c1584e5ace282fb23966c7e3abd4d2284ebe8fa48b6a0e4fb41ce76fdadb66e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716205950088941153,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbz7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3
6efcc0-4f91-4c5d-8781-0bb5f22187ab,},Annotations:map[string]string{io.kubernetes.container.hash: 98581ddf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb,PodSandboxId:454543b0cb4cfce5dd4d8d64d5e7412adb9adb38ce89f77793c9ef74eddfcf04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1716205949561791299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wkfkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b7143-7bff-47f3-ab69-cd87eec68013,},Annotations:map[string]string{io.kubernetes.container.hash: 15dd8589,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292,PodSandboxId:a935507c5eb9bba1ea795c3b839e245fb248b37c27d2d057004a28cd3faac6e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716205929646435708,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ce6754bf51d7f1357b80d45b5714b,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88,PodSandboxId:33d15f48f264c9fbf0c963787b156cec2212ef022f8bec3843226374633c9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:171620592967630
4710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e26e37c75cdd5dbbc768c26af58ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6,PodSandboxId:7425c02584ccb23e557f28ec2dd86e595082d99cec925fcd8f3fe8fbb927bd33,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716205929615674493,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee1a24551000cc8a34f7c390fab6ba2c,},Annotations:map[string]string{io.kubernetes.container.hash: a808cf68,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1,PodSandboxId:5cceeab3a53cb296769746e8be114b3b7bba322bc8768756752cc5f9a4e3fcc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716205929599029501,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-814599,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 419d30f9580a9703df9b861bc34ec254,},Annotations:map[string]string{io.kubernetes.container.hash: 62c19597,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e00e2038-7513-43a2-ab35-b028d8886544 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b10ea977d057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   3bc1c76e2967b       storage-provisioner
	ac9ee696baddd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   186a07c2b9dce       coredns-7db6d8ff4d-8qdbk
	621a853961ce5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   c1584e5ace282       coredns-7db6d8ff4d-kbz7d
	852810a24d4b8       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   14 minutes ago      Running             kube-proxy                0                   454543b0cb4cf       kube-proxy-wkfkc
	10ba469b3ff42       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   14 minutes ago      Running             kube-scheduler            2                   33d15f48f264c       kube-scheduler-no-preload-814599
	c62ae52ced8bb       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   14 minutes ago      Running             kube-controller-manager   2                   a935507c5eb9b       kube-controller-manager-no-preload-814599
	970dc977aa55a       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 minutes ago      Running             kube-apiserver            2                   7425c02584ccb       kube-apiserver-no-preload-814599
	40ee432c68f3b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   5cceeab3a53cb       etcd-no-preload-814599
	
	
	==> coredns [621a853961ce5be7b31f6964d770362d83ef026be2ce991d5fbb293230f1943b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ac9ee696baddd009802bd6e05ef865d184772139e53d7564dbfd1ce3a1121e29] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-814599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-814599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=no-preload-814599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:52:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-814599
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 12:06:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 12:02:47 +0000   Mon, 20 May 2024 11:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 12:02:47 +0000   Mon, 20 May 2024 11:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 12:02:47 +0000   Mon, 20 May 2024 11:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 12:02:47 +0000   Mon, 20 May 2024 11:52:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    no-preload-814599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2f29426d32543759e35a87fdce091d7
	  System UUID:                f2f29426-d325-4375-9e35-a87fdce091d7
	  Boot ID:                    5e6ed7bc-04cd-434c-a77d-3c174a8c5d2f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8qdbk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-kbz7d                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-814599                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-814599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-814599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wkfkc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-814599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-lggvs              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-814599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-814599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-814599 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-814599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-814599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-814599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-814599 event: Registered Node no-preload-814599 in Controller
	
	
	==> dmesg <==
	[  +0.041288] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.795774] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.543045] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.643245] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 11:47] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.056955] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073880] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.205010] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.121108] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.295123] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[ +16.411629] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.069163] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.905885] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +3.470549] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.795070] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.105964] kauditd_printk_skb: 35 callbacks suppressed
	[May20 11:52] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.519819] systemd-fstab-generator[4006]: Ignoring "noauto" option for root device
	[  +4.447525] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.928669] systemd-fstab-generator[4327]: Ignoring "noauto" option for root device
	[ +13.894593] systemd-fstab-generator[4538]: Ignoring "noauto" option for root device
	[  +0.114607] kauditd_printk_skb: 14 callbacks suppressed
	[May20 11:53] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [40ee432c68f3b3373adaf6ef970aaeeeb5a9c31c0a3c29c4e807e441751469e1] <==
	{"level":"info","ts":"2024-05-20T11:52:09.962476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2024-05-20T11:52:09.962708Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-05-20T11:52:09.963049Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-05-20T11:52:09.963088Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-05-20T11:52:10.283822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T11:52:10.283882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T11:52:10.283915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 1"}
	{"level":"info","ts":"2024-05-20T11:52:10.283928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.283935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.283942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.283949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-05-20T11:52:10.288871Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.293033Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:no-preload-814599 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T11:52:10.29382Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:52:10.304976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-05-20T11:52:10.305386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.305478Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.305522Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T11:52:10.305534Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T11:52:10.318434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T11:52:10.324828Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T11:52:10.324904Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-20T12:02:10.711955Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-05-20T12:02:10.720987Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":712,"took":"8.702399ms","hash":1305532980,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-05-20T12:02:10.721056Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1305532980,"revision":712,"compact-revision":-1}
	
	
	==> kernel <==
	 12:06:59 up 20 min,  0 users,  load average: 0.22, 0.28, 0.17
	Linux no-preload-814599 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [970dc977aa55a0ff29055628d97843b7a29d0fcee72db7dce380c35ded7a50d6] <==
	I0520 12:00:13.421037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:02:12.423963       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:02:12.424117       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0520 12:02:13.425139       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:02:13.425181       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:02:13.425193       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:02:13.425219       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:02:13.425306       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:02:13.426571       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:03:13.426207       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:03:13.426410       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:03:13.426451       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:03:13.427804       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:03:13.427925       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:03:13.427954       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:05:13.427019       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:05:13.427323       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0520 12:05:13.427358       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0520 12:05:13.428191       1 handler_proxy.go:93] no RequestInfo found in the context
	E0520 12:05:13.428292       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 12:05:13.428378       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c62ae52ced8bb41a38ab19762cb1fa3542762a412bd93dc6a09811a145f8e292] <==
	I0520 12:01:28.481149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:01:58.001147       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:01:58.489533       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:02:28.006513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:02:28.497883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:02:58.011939       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:02:58.506317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:03:28.018644       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:03:28.517834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0520 12:03:29.235378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="326.157µs"
	I0520 12:03:41.231526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.36µs"
	E0520 12:03:58.023437       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:03:58.526495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:04:28.029649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:04:28.535283       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:04:58.034561       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:04:58.543658       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:05:28.041721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:05:28.554674       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:05:58.047642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:05:58.563014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:06:28.053311       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:06:28.573971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0520 12:06:58.058159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0520 12:06:58.583508       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [852810a24d4b89497902fc14dde5ddc04cb4cdd9b7979371a2cf1a7213fd5edb] <==
	I0520 11:52:30.257188       1 server_linux.go:69] "Using iptables proxy"
	I0520 11:52:30.453297       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0520 11:52:30.621879       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0520 11:52:30.621919       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0520 11:52:30.621934       1 server_linux.go:165] "Using iptables Proxier"
	I0520 11:52:30.625363       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 11:52:30.625646       1 server.go:872] "Version info" version="v1.30.1"
	I0520 11:52:30.626075       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 11:52:30.628083       1 config.go:192] "Starting service config controller"
	I0520 11:52:30.628166       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 11:52:30.628902       1 config.go:101] "Starting endpoint slice config controller"
	I0520 11:52:30.628993       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 11:52:30.630312       1 config.go:319] "Starting node config controller"
	I0520 11:52:30.630348       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 11:52:30.729054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 11:52:30.729096       1 shared_informer.go:320] Caches are synced for service config
	I0520 11:52:30.730592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [10ba469b3ff42a4f8c41b04337320a7e0c135fa756b3360fe36653a608d75d88] <==
	W0520 11:52:12.455933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:12.455966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:12.456090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:52:12.456123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:52:12.456247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:12.456325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:13.346932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:52:13.347001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 11:52:13.355335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:52:13.355383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0520 11:52:13.365679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:52:13.367285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 11:52:13.366579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:52:13.367397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 11:52:13.376069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:13.376173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:13.382356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:52:13.382418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 11:52:13.385383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 11:52:13.385586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 11:52:13.387291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:52:13.387431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0520 11:52:13.840570       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:52:13.840700       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0520 11:52:15.747232       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 12:04:15 no-preload-814599 kubelet[4334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:04:15 no-preload-814599 kubelet[4334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:04:21 no-preload-814599 kubelet[4334]: E0520 12:04:21.214084    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:04:35 no-preload-814599 kubelet[4334]: E0520 12:04:35.215628    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:04:48 no-preload-814599 kubelet[4334]: E0520 12:04:48.214069    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:05:01 no-preload-814599 kubelet[4334]: E0520 12:05:01.215172    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:05:13 no-preload-814599 kubelet[4334]: E0520 12:05:13.214669    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:05:15 no-preload-814599 kubelet[4334]: E0520 12:05:15.234020    4334 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:05:15 no-preload-814599 kubelet[4334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:05:15 no-preload-814599 kubelet[4334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:05:15 no-preload-814599 kubelet[4334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:05:15 no-preload-814599 kubelet[4334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:05:25 no-preload-814599 kubelet[4334]: E0520 12:05:25.215428    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:05:40 no-preload-814599 kubelet[4334]: E0520 12:05:40.213819    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:05:54 no-preload-814599 kubelet[4334]: E0520 12:05:54.213912    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:06:06 no-preload-814599 kubelet[4334]: E0520 12:06:06.213178    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:06:15 no-preload-814599 kubelet[4334]: E0520 12:06:15.232958    4334 iptables.go:577] "Could not set up iptables canary" err=<
	May 20 12:06:15 no-preload-814599 kubelet[4334]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 20 12:06:15 no-preload-814599 kubelet[4334]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 20 12:06:15 no-preload-814599 kubelet[4334]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 20 12:06:15 no-preload-814599 kubelet[4334]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 20 12:06:18 no-preload-814599 kubelet[4334]: E0520 12:06:18.213493    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:06:29 no-preload-814599 kubelet[4334]: E0520 12:06:29.213368    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:06:40 no-preload-814599 kubelet[4334]: E0520 12:06:40.214133    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	May 20 12:06:51 no-preload-814599 kubelet[4334]: E0520 12:06:51.213974    4334 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lggvs" podUID="72eeb6ee-27ce-4485-b364-5c9b87283d29"
	
	
	==> storage-provisioner [0b10ea977d0579b15e4c976dff35f3fcab52bf2c203dac63d829d0c02b894748] <==
	I0520 11:52:30.795684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:52:30.820684       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:52:30.821062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:52:30.835193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:52:30.835579       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-814599_bc915668-1408-43bb-9ba0-6078a3bf8d32!
	I0520 11:52:30.836666       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca28e41c-3685-4dce-abaa-1fcd2de91890", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-814599_bc915668-1408-43bb-9ba0-6078a3bf8d32 became leader
	I0520 11:52:30.936173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-814599_bc915668-1408-43bb-9ba0-6078a3bf8d32!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-814599 -n no-preload-814599
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-814599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-lggvs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-814599 describe pod metrics-server-569cc877fc-lggvs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-814599 describe pod metrics-server-569cc877fc-lggvs: exit status 1 (65.722448ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-lggvs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-814599 describe pod metrics-server-569cc877fc-lggvs: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (322.11s)
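Side note: the describe above returns NotFound most likely because no namespace was given, while the kubelet log places the pod in kube-system. A minimal sketch for rerunning the post-mortem checks by hand, assuming the no-preload-814599 context still exists:

	# same non-Running filter the helper uses
	kubectl --context no-preload-814599 get po -A --field-selector=status.phase!=Running
	# describe the pod in its actual namespace
	kubectl --context no-preload-814599 -n kube-system describe pod metrics-server-569cc877fc-lggvs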

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (203.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
[the identical "connection refused" warning above repeated verbatim, once per poll, for the remainder of the wait]
E0520 12:04:54.309487   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
E0520 12:05:22.239652   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.100:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.100:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (218.703987ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-210420" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-210420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-210420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.496µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-210420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
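Note: once the apiserver is reachable again, the image actually carried by the scraper deployment could be checked manually with a standard kubectl query along these lines (a hypothetical follow-up step, not part of the test run; the context and deployment names are taken from the log above):

kubectl --context old-k8s-version-210420 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'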
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (225.973943ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
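Note: the two status probes in this post-mortem report Host as Running while the APIServer is Stopped; a combined view of all components could be obtained manually with the unformatted status call below (a hypothetical follow-up using only the binary and flags already seen in this log):

out/minikube-linux-amd64 status -p old-k8s-version-210420 -n old-k8s-version-210420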
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-210420 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-210420 logs -n 25: (1.622013466s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-854547                           | kubernetes-upgrade-854547    | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:36 UTC |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:36 UTC | 20 May 24 11:38 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-814599             | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-120159                                        | pause-120159                 | jenkins | v1.33.1 | 20 May 24 11:38 UTC | 20 May 24 11:39 UTC |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-355991                              | cert-expiration-355991       | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-793579 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:39 UTC |
	|         | disable-driver-mounts-793579                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:39 UTC | 20 May 24 11:41 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274976            | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-814599                  | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-814599                                   | no-preload-814599            | jenkins | v1.33.1 | 20 May 24 11:40 UTC | 20 May 24 11:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210420        | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-668445  | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC | 20 May 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:41 UTC |                     |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274976                 | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210420             | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210420                              | old-k8s-version-210420       | jenkins | v1.33.1 | 20 May 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274976                                  | embed-certs-274976           | jenkins | v1.33.1 | 20 May 24 11:42 UTC | 20 May 24 11:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-668445       | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-668445 | jenkins | v1.33.1 | 20 May 24 11:44 UTC | 20 May 24 11:51 UTC |
	|         | default-k8s-diff-port-668445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:44:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:44:01.940145   58840 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:44:01.940398   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940409   58840 out.go:304] Setting ErrFile to fd 2...
	I0520 11:44:01.940415   58840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:44:01.940633   58840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:44:01.941201   58840 out.go:298] Setting JSON to false
	I0520 11:44:01.942056   58840 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5185,"bootTime":1716200257,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:44:01.942118   58840 start.go:139] virtualization: kvm guest
	I0520 11:44:01.944331   58840 out.go:177] * [default-k8s-diff-port-668445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:44:01.945948   58840 notify.go:220] Checking for updates...
	I0520 11:44:01.945956   58840 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:44:01.947257   58840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:44:01.948525   58840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:44:01.949881   58840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:44:01.951156   58840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:44:01.952736   58840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:44:01.954565   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:44:01.954932   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.954979   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.969623   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0520 11:44:01.969984   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.970444   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.970489   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.970816   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.971013   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:01.971212   58840 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:44:01.971493   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:44:01.971540   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:44:01.985556   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0520 11:44:01.985885   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:44:01.986365   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:44:01.986388   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:44:01.986707   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:44:01.986883   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:44:02.021778   58840 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 11:44:02.022944   58840 start.go:297] selected driver: kvm2
	I0520 11:44:02.022954   58840 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.023042   58840 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:44:02.023717   58840 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.023796   58840 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 11:44:02.038516   58840 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 11:44:02.038900   58840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:44:02.038931   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:44:02.038942   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:44:02.038989   58840 start.go:340] cluster config:
	{Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:44:02.039097   58840 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:44:02.041645   58840 out.go:177] * Starting "default-k8s-diff-port-668445" primary control-plane node in "default-k8s-diff-port-668445" cluster
	I0520 11:44:02.042785   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:44:02.042826   58840 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 11:44:02.042839   58840 cache.go:56] Caching tarball of preloaded images
	I0520 11:44:02.042917   58840 preload.go:173] Found /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0520 11:44:02.042929   58840 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0520 11:44:02.043041   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:44:02.043267   58840 start.go:360] acquireMachinesLock for default-k8s-diff-port-668445: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:44:06.605177   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:09.677144   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:15.757188   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:18.829221   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:24.909165   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:27.981180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:34.061180   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:37.133203   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:43.213225   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:46.285226   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:52.365216   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:44:55.437121   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:01.517184   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:04.589206   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:10.669220   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:13.741264   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:19.821209   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:22.893127   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:28.973155   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:32.045217   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.129769   58316 start.go:364] duration metric: took 3m9.329208681s to acquireMachinesLock for "old-k8s-version-210420"
	I0520 11:45:41.129841   58316 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:45:41.129847   58316 fix.go:54] fixHost starting: 
	I0520 11:45:41.130178   58316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:45:41.130209   58316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:45:41.145527   58316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34005
	I0520 11:45:41.145974   58316 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:45:41.146470   58316 main.go:141] libmachine: Using API Version  1
	I0520 11:45:41.146491   58316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:45:41.146803   58316 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:45:41.146962   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:41.147096   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetState
	I0520 11:45:41.148653   58316 fix.go:112] recreateIfNeeded on old-k8s-version-210420: state=Stopped err=<nil>
	I0520 11:45:41.148678   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	W0520 11:45:41.148835   58316 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:45:41.150515   58316 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210420" ...
	I0520 11:45:38.125195   57519 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.185:22: connect: no route to host
	I0520 11:45:41.127376   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:41.127411   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127706   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:45:41.127739   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:45:41.127954   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:45:41.129638   57519 machine.go:97] duration metric: took 4m44.663213682s to provisionDockerMachine
	I0520 11:45:41.129681   57519 fix.go:56] duration metric: took 4m44.685919857s for fixHost
	I0520 11:45:41.129692   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 4m44.68594997s
	W0520 11:45:41.129717   57519 start.go:713] error starting host: provision: host is not running
	W0520 11:45:41.129807   57519 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0520 11:45:41.129817   57519 start.go:728] Will try again in 5 seconds ...
	I0520 11:45:41.151697   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .Start
	I0520 11:45:41.151882   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring networks are active...
	I0520 11:45:41.152702   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network default is active
	I0520 11:45:41.153047   58316 main.go:141] libmachine: (old-k8s-version-210420) Ensuring network mk-old-k8s-version-210420 is active
	I0520 11:45:41.153452   58316 main.go:141] libmachine: (old-k8s-version-210420) Getting domain xml...
	I0520 11:45:41.154219   58316 main.go:141] libmachine: (old-k8s-version-210420) Creating domain...
	I0520 11:45:46.134344   57519 start.go:360] acquireMachinesLock for no-preload-814599: {Name:mk8938b3cbc1430dfda9af859ea7269768f74090 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0520 11:45:42.344294   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting to get IP...
	I0520 11:45:42.345102   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.345496   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.345567   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.345490   59217 retry.go:31] will retry after 272.13676ms: waiting for machine to come up
	I0520 11:45:42.619253   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.619676   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.619706   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.619633   59217 retry.go:31] will retry after 374.178023ms: waiting for machine to come up
	I0520 11:45:42.995376   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:42.995793   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:42.995819   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:42.995748   59217 retry.go:31] will retry after 462.060487ms: waiting for machine to come up
	I0520 11:45:43.459411   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.459824   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.459860   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.459791   59217 retry.go:31] will retry after 479.382552ms: waiting for machine to come up
	I0520 11:45:43.940363   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:43.940850   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:43.940873   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:43.940802   59217 retry.go:31] will retry after 633.680965ms: waiting for machine to come up
	I0520 11:45:44.575531   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:44.575957   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:44.575985   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:44.575915   59217 retry.go:31] will retry after 755.049677ms: waiting for machine to come up
	I0520 11:45:45.332864   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:45.333335   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:45.333368   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:45.333292   59217 retry.go:31] will retry after 757.292563ms: waiting for machine to come up
	I0520 11:45:46.092128   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:46.092583   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:46.092605   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:46.092544   59217 retry.go:31] will retry after 1.150182695s: waiting for machine to come up
	I0520 11:45:47.244071   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:47.244526   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:47.244588   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:47.244474   59217 retry.go:31] will retry after 1.316447926s: waiting for machine to come up
	I0520 11:45:48.562914   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:48.563296   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:48.563326   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:48.563240   59217 retry.go:31] will retry after 1.615920757s: waiting for machine to come up
	I0520 11:45:50.180999   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:50.181361   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:50.181395   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:50.181336   59217 retry.go:31] will retry after 2.441112889s: waiting for machine to come up
	I0520 11:45:52.624374   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:52.624739   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:52.624771   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:52.624722   59217 retry.go:31] will retry after 2.505225423s: waiting for machine to come up
	I0520 11:45:55.133339   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:55.133626   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | unable to find current IP address of domain old-k8s-version-210420 in network mk-old-k8s-version-210420
	I0520 11:45:55.133655   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | I0520 11:45:55.133596   59217 retry.go:31] will retry after 4.197262735s: waiting for machine to come up
	I0520 11:46:00.745820   58398 start.go:364] duration metric: took 3m22.67445444s to acquireMachinesLock for "embed-certs-274976"
	I0520 11:46:00.745877   58398 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:00.745887   58398 fix.go:54] fixHost starting: 
	I0520 11:46:00.746287   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:00.746320   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:00.762678   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0520 11:46:00.763132   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:00.763654   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:00.763681   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:00.763991   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:00.764154   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:00.764308   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:00.765826   58398 fix.go:112] recreateIfNeeded on embed-certs-274976: state=Stopped err=<nil>
	I0520 11:46:00.765861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	W0520 11:46:00.766017   58398 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:00.768362   58398 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274976" ...
	I0520 11:45:59.334216   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334678   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has current primary IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.334700   58316 main.go:141] libmachine: (old-k8s-version-210420) Found IP for machine: 192.168.83.100
	I0520 11:45:59.334712   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserving static IP address...
	I0520 11:45:59.335134   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.335166   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | skip adding static IP to network mk-old-k8s-version-210420 - found existing host DHCP lease matching {name: "old-k8s-version-210420", mac: "52:54:00:69:43:1a", ip: "192.168.83.100"}
	I0520 11:45:59.335180   58316 main.go:141] libmachine: (old-k8s-version-210420) Reserved static IP address: 192.168.83.100
	I0520 11:45:59.335197   58316 main.go:141] libmachine: (old-k8s-version-210420) Waiting for SSH to be available...
	I0520 11:45:59.335212   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Getting to WaitForSSH function...
	I0520 11:45:59.337206   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337476   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.337501   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.337634   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH client type: external
	I0520 11:45:59.337681   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa (-rw-------)
	I0520 11:45:59.337716   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:45:59.337732   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | About to run SSH command:
	I0520 11:45:59.337743   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | exit 0
	I0520 11:45:59.460960   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | SSH cmd err, output: <nil>: 
	I0520 11:45:59.461347   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetConfigRaw
	I0520 11:45:59.461978   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.464416   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.464734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.464756   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.465018   58316 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/config.json ...
	I0520 11:45:59.465183   58316 machine.go:94] provisionDockerMachine start ...
	I0520 11:45:59.465201   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:45:59.465416   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.467457   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467751   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.467782   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.467892   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.468050   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468204   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.468330   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.468474   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.468655   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.468665   58316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:45:59.573237   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:45:59.573264   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573558   58316 buildroot.go:166] provisioning hostname "old-k8s-version-210420"
	I0520 11:45:59.573586   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.573758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.576245   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576734   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.576770   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.576778   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.576957   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577122   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.577267   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.577413   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.577620   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.577639   58316 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210420 && echo "old-k8s-version-210420" | sudo tee /etc/hostname
	I0520 11:45:59.695529   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210420
	
	I0520 11:45:59.695573   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.698545   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698808   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.698835   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.698964   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:45:59.699144   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699331   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:45:59.699427   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:45:59.699594   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:45:59.699744   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:45:59.699767   58316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210420/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:45:59.814736   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:45:59.814765   58316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:45:59.814820   58316 buildroot.go:174] setting up certificates
	I0520 11:45:59.814832   58316 provision.go:84] configureAuth start
	I0520 11:45:59.814841   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetMachineName
	I0520 11:45:59.815158   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:45:59.817591   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.817917   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.817946   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.818103   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:45:59.820205   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820474   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:45:59.820499   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:45:59.820625   58316 provision.go:143] copyHostCerts
	I0520 11:45:59.820681   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:45:59.820698   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:45:59.820782   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:45:59.820887   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:45:59.820896   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:45:59.820921   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:45:59.821006   58316 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:45:59.821015   58316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:45:59.821038   58316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:45:59.821092   58316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210420 san=[127.0.0.1 192.168.83.100 localhost minikube old-k8s-version-210420]
	I0520 11:46:00.093158   58316 provision.go:177] copyRemoteCerts
	I0520 11:46:00.093218   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:00.093251   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.095725   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096033   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.096055   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.096180   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.096371   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.096549   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.096693   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.179313   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:00.203235   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:46:00.226670   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:00.250832   58316 provision.go:87] duration metric: took 435.987724ms to configureAuth
	I0520 11:46:00.250868   58316 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:00.251054   58316 config.go:182] Loaded profile config "old-k8s-version-210420": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:46:00.251183   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.253679   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.253986   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.254020   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.254118   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.254305   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254489   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.254626   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.254777   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.255000   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.255023   58316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:00.512174   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:00.512207   58316 machine.go:97] duration metric: took 1.047011014s to provisionDockerMachine
	I0520 11:46:00.512222   58316 start.go:293] postStartSetup for "old-k8s-version-210420" (driver="kvm2")
	I0520 11:46:00.512236   58316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:00.512259   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.512546   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:00.512577   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.515025   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515358   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.515386   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.515512   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.515696   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.515879   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.516004   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.599956   58316 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:00.604374   58316 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:00.604399   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:00.604486   58316 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:00.604575   58316 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:00.604704   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:00.614371   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:00.638093   58316 start.go:296] duration metric: took 125.858293ms for postStartSetup
	I0520 11:46:00.638127   58316 fix.go:56] duration metric: took 19.508279908s for fixHost
	I0520 11:46:00.638151   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.640770   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641148   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.641192   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.641352   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.641539   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641702   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.641844   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.641985   58316 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:00.642192   58316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.83.100 22 <nil> <nil>}
	I0520 11:46:00.642207   58316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:00.745686   58316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205560.721975623
	
	I0520 11:46:00.745708   58316 fix.go:216] guest clock: 1716205560.721975623
	I0520 11:46:00.745716   58316 fix.go:229] Guest: 2024-05-20 11:46:00.721975623 +0000 UTC Remote: 2024-05-20 11:46:00.638130573 +0000 UTC m=+208.973304879 (delta=83.84505ms)
	I0520 11:46:00.745736   58316 fix.go:200] guest clock delta is within tolerance: 83.84505ms
	I0520 11:46:00.745740   58316 start.go:83] releasing machines lock for "old-k8s-version-210420", held for 19.615924008s
	I0520 11:46:00.745762   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.746032   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:00.748741   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749123   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.749146   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.749288   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749758   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.749932   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .DriverName
	I0520 11:46:00.750021   58316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:00.750070   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.750200   58316 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:00.750225   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHHostname
	I0520 11:46:00.752648   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.752923   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753019   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753045   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753156   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753244   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:00.753273   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:00.753343   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753420   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHPort
	I0520 11:46:00.753519   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753582   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHKeyPath
	I0520 11:46:00.753650   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	I0520 11:46:00.753744   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetSSHUsername
	I0520 11:46:00.753879   58316 sshutil.go:53] new ssh client: &{IP:192.168.83.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/old-k8s-version-210420/id_rsa Username:docker}
	W0520 11:46:00.839633   58316 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:00.839741   58316 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:00.861801   58316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:01.011737   58316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:01.018898   58316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:01.018964   58316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:01.035600   58316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
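The find-and-rename above disables any bridge or podman CNI config by giving it a .mk_disabled suffix so the runtime ignores it. A rough Go equivalent of that step (directory, name patterns and suffix mirror the shell command; this is an illustrative sketch, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs in dir by adding a
// ".mk_disabled" suffix, mirroring the `find ... -exec mv` command in the log.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}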
	I0520 11:46:01.035622   58316 start.go:494] detecting cgroup driver to use...
	I0520 11:46:01.035698   58316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:01.051692   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:01.065994   58316 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:01.066083   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:01.079249   58316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:01.093862   58316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:01.206394   58316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:01.362764   58316 docker.go:233] disabling docker service ...
	I0520 11:46:01.362842   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:01.378012   58316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:01.393231   58316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:01.535932   58316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:01.677217   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:01.691602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:01.716408   58316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0520 11:46:01.716457   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.729460   58316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:01.729541   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.740455   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:01.751846   58316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
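The four sed calls above are plain in-place edits of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, and set conmon_cgroup to "pod". A small Go sketch doing the same rewrite in-process (illustrative only; the real flow runs sed over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)
	// Pin the pause image, as the first sed command does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Drop any existing conmon_cgroup lines, then rewrite cgroup_manager and
	// append the conmon_cgroup setting right after it.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}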
	I0520 11:46:01.763174   58316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:01.775353   58316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:01.786021   58316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:01.786088   58316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:01.801529   58316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
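Because the bridge-netfilter sysctl is missing on the first check, the flow falls back to loading br_netfilter and then enables IPv4 forwarding before restarting CRI-O. A compact sketch of that sequence, shelling out the same way the log does (illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Verify the bridge-netfilter sysctl; if it is missing, load the module.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}
	// Enable IPv4 forwarding, as in the log's `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
	fmt.Println("netfilter prerequisites configured")
}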
	I0520 11:46:01.812523   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:01.927496   58316 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:02.084059   58316 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:02.084137   58316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:02.089878   58316 start.go:562] Will wait 60s for crictl version
	I0520 11:46:02.089925   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:02.094050   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:02.134859   58316 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:02.134936   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.166705   58316 ssh_runner.go:195] Run: crio --version
	I0520 11:46:02.196374   58316 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0520 11:46:00.769810   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Start
	I0520 11:46:00.769988   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring networks are active...
	I0520 11:46:00.770615   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network default is active
	I0520 11:46:00.770928   58398 main.go:141] libmachine: (embed-certs-274976) Ensuring network mk-embed-certs-274976 is active
	I0520 11:46:00.771273   58398 main.go:141] libmachine: (embed-certs-274976) Getting domain xml...
	I0520 11:46:00.771977   58398 main.go:141] libmachine: (embed-certs-274976) Creating domain...
	I0520 11:46:02.023816   58398 main.go:141] libmachine: (embed-certs-274976) Waiting to get IP...
	I0520 11:46:02.024616   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.025031   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.025111   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.025009   59335 retry.go:31] will retry after 218.274206ms: waiting for machine to come up
	I0520 11:46:02.244661   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.245135   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.245163   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.245110   59335 retry.go:31] will retry after 359.160276ms: waiting for machine to come up
	I0520 11:46:02.606589   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:02.607106   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:02.607129   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:02.607044   59335 retry.go:31] will retry after 462.479953ms: waiting for machine to come up
	I0520 11:46:02.197655   58316 main.go:141] libmachine: (old-k8s-version-210420) Calling .GetIP
	I0520 11:46:02.200774   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201186   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:43:1a", ip: ""} in network mk-old-k8s-version-210420: {Iface:virbr4 ExpiryTime:2024-05-20 12:45:51 +0000 UTC Type:0 Mac:52:54:00:69:43:1a Iaid: IPaddr:192.168.83.100 Prefix:24 Hostname:old-k8s-version-210420 Clientid:01:52:54:00:69:43:1a}
	I0520 11:46:02.201211   58316 main.go:141] libmachine: (old-k8s-version-210420) DBG | domain old-k8s-version-210420 has defined IP address 192.168.83.100 and MAC address 52:54:00:69:43:1a in network mk-old-k8s-version-210420
	I0520 11:46:02.201385   58316 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:02.205589   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
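The /etc/hosts update above is a rewrite-in-place: strip any existing host.minikube.internal line, append the fresh IP mapping, and copy the result back. A hedged Go sketch of the same idea (path and hostname mirror the log; error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<host>" from path and appends
// "<ip>\t<host>", mimicking the grep -v / echo pipeline in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.83.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}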
	I0520 11:46:02.217983   58316 kubeadm.go:877] updating cluster {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:02.218091   58316 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 11:46:02.218135   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:02.265907   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:02.265970   58316 ssh_runner.go:195] Run: which lz4
	I0520 11:46:02.271466   58316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0520 11:46:02.276993   58316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:02.277017   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0520 11:46:04.082571   58316 crio.go:462] duration metric: took 1.811145307s to copy over tarball
	I0520 11:46:04.082648   58316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:03.071686   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.072242   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.072271   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.072198   59335 retry.go:31] will retry after 531.663033ms: waiting for machine to come up
	I0520 11:46:03.606121   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:03.606629   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:03.606653   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:03.606581   59335 retry.go:31] will retry after 690.866851ms: waiting for machine to come up
	I0520 11:46:04.299326   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:04.299799   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:04.299829   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:04.299750   59335 retry.go:31] will retry after 928.456543ms: waiting for machine to come up
	I0520 11:46:05.230115   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.230568   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.230586   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.230539   59335 retry.go:31] will retry after 761.597702ms: waiting for machine to come up
	I0520 11:46:05.993511   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:05.993897   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:05.993930   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:05.993854   59335 retry.go:31] will retry after 1.405329972s: waiting for machine to come up
	I0520 11:46:07.401706   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:07.402246   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:07.402312   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:07.402223   59335 retry.go:31] will retry after 1.717177668s: waiting for machine to come up
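The embed-certs-274976 machine is still booting, so libmachine keeps polling for an IP with a growing, jittered delay (218ms, 359ms, 462ms, ... in the log). A generic retry-with-backoff helper in the same spirit (the doubling factor and jitter are assumptions for this sketch; minikube's retry package differs in detail):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// jittered, growing delay between tries, like the "will retry after ..." lines.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2 // growth factor is an assumption for this sketch
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done after", calls, "calls, err =", err)
}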
	I0520 11:46:06.952510   58316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.869830998s)
	I0520 11:46:06.952533   58316 crio.go:469] duration metric: took 2.869936445s to extract the tarball
	I0520 11:46:06.952541   58316 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:06.996387   58316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:07.032067   58316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0520 11:46:07.032093   58316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:46:07.032230   58316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.032216   58316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.032217   58316 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0520 11:46:07.032350   58316 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.032411   58316 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.032692   58316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.033198   58316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034239   58316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.034327   58316 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0520 11:46:07.034404   58316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.034411   58316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.034416   58316 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.034442   58316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.207835   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.209772   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0520 11:46:07.278794   58316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0520 11:46:07.278843   58316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.278895   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.284049   58316 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0520 11:46:07.284098   58316 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0520 11:46:07.284149   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.286345   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0520 11:46:07.289361   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0520 11:46:07.331714   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0520 11:46:07.337994   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0520 11:46:07.373533   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.380715   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.385188   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.390552   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.410661   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.481535   58316 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0520 11:46:07.481576   58316 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.481626   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494357   58316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0520 11:46:07.494374   58316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0520 11:46:07.494402   58316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.494404   58316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.494451   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.507981   58316 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0520 11:46:07.508023   58316 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.508069   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.516898   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0520 11:46:07.517006   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0520 11:46:07.517019   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0520 11:46:07.517060   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0520 11:46:07.517248   58316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0520 11:46:07.517282   58316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.517309   58316 ssh_runner.go:195] Run: which crictl
	I0520 11:46:07.581093   58316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0520 11:46:07.581200   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0520 11:46:07.622963   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0520 11:46:07.623008   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0520 11:46:07.623069   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0520 11:46:07.632151   58316 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0520 11:46:07.915373   58316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:08.058938   58316 cache_images.go:92] duration metric: took 1.026816798s to LoadCachedImages
	W0520 11:46:08.059038   58316 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0520 11:46:08.059057   58316 kubeadm.go:928] updating node { 192.168.83.100 8443 v1.20.0 crio true true} ...
	I0520 11:46:08.059208   58316 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210420 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:08.059295   58316 ssh_runner.go:195] Run: crio config
	I0520 11:46:08.131486   58316 cni.go:84] Creating CNI manager for ""
	I0520 11:46:08.131511   58316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:08.131532   58316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:08.131561   58316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210420 NodeName:old-k8s-version-210420 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:46:08.131724   58316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210420"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:08.131800   58316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:46:08.143109   58316 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:08.143202   58316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:08.153988   58316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0520 11:46:08.172321   58316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:08.190157   58316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0520 11:46:08.208084   58316 ssh_runner.go:195] Run: grep 192.168.83.100	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:08.212211   58316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:08.225134   58316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:08.350083   58316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:08.368185   58316 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420 for IP: 192.168.83.100
	I0520 11:46:08.368202   58316 certs.go:194] generating shared ca certs ...
	I0520 11:46:08.368216   58316 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.368372   58316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:08.368422   58316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:08.368435   58316 certs.go:256] generating profile certs ...
	I0520 11:46:08.368552   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.key
	I0520 11:46:08.368620   58316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key.a7d5d0f9
	I0520 11:46:08.368664   58316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key
	I0520 11:46:08.368806   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:08.368839   58316 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:08.368852   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:08.368884   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:08.368914   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:08.368967   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:08.369020   58316 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:08.369674   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:08.402860   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:08.432963   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:08.458740   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:08.490622   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:46:08.529627   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:08.563064   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:08.604332   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0520 11:46:08.632006   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:08.656708   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:08.680552   58316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:08.704942   58316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:08.723543   58316 ssh_runner.go:195] Run: openssl version
	I0520 11:46:08.729776   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:08.742608   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747374   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.747417   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:08.753504   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:08.765976   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:08.777365   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781823   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.781879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:08.787550   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:08.798382   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:08.810174   58316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814810   58316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.814879   58316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:08.820897   58316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:08.832012   58316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:08.836507   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:08.842727   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:08.849208   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:08.855730   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:08.862050   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:08.867819   58316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
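Each `openssl x509 -checkend 86400` call above only asks whether the certificate will still be valid 24 hours from now. An equivalent check with Go's crypto/x509 (the file path is one of the paths from the log; this mirrors the -checkend semantics, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the case where `openssl x509 -checkend <seconds>` would fail.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}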
	I0520 11:46:08.873633   58316 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210420 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-210420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:08.873750   58316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:08.873800   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.912922   58316 cri.go:89] found id: ""
	I0520 11:46:08.913010   58316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:08.924046   58316 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:08.924084   58316 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:08.924092   58316 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:08.924154   58316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:08.934812   58316 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:08.936218   58316 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210420" does not appear in /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:08.937232   58316 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-3819/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210420" cluster setting kubeconfig missing "old-k8s-version-210420" context setting]
	I0520 11:46:08.938681   58316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:08.947598   58316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:08.958245   58316 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.100
	I0520 11:46:08.958280   58316 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:08.958293   58316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:08.958349   58316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:08.996333   58316 cri.go:89] found id: ""
	I0520 11:46:08.996413   58316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:09.015359   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:09.026711   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:09.026733   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:09.026784   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:09.036886   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:09.036954   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:09.046584   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:09.056035   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:09.056095   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:09.067501   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.077064   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:09.077123   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:09.086820   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:09.096267   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:09.096346   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
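The four checks above share one pattern: if a leftover kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (or does not exist), it is treated as stale and removed so kubeadm can regenerate it. A compact sketch of that cleanup loop (file list and endpoint copied from the log; illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or not pointing at the expected endpoint: remove it so
			// `kubeadm init phase kubeconfig` can recreate it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("keeping:", f)
	}
}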
	I0520 11:46:09.106407   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:09.116143   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.240367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:09.968603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.251776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.363870   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:10.466177   58316 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:10.466284   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:10.967012   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:11.467113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:09.121909   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:09.122323   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:09.122358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:09.122261   59335 retry.go:31] will retry after 1.49678652s: waiting for machine to come up
	I0520 11:46:10.620875   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:10.621226   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:10.621255   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:10.621184   59335 retry.go:31] will retry after 2.772139112s: waiting for machine to come up
	I0520 11:46:11.966365   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.466509   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:12.966500   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.467280   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.966838   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.466319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:14.966768   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.466848   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:15.966953   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:16.466890   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:13.396309   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:13.396705   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:13.396725   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:13.396678   59335 retry.go:31] will retry after 2.987909039s: waiting for machine to come up
	I0520 11:46:16.386007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:16.386500   58398 main.go:141] libmachine: (embed-certs-274976) DBG | unable to find current IP address of domain embed-certs-274976 in network mk-embed-certs-274976
	I0520 11:46:16.386527   58398 main.go:141] libmachine: (embed-certs-274976) DBG | I0520 11:46:16.386460   59335 retry.go:31] will retry after 4.351760111s: waiting for machine to come up
	I0520 11:46:16.967036   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.466679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:17.966631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.466439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:18.966783   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:19.967319   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.466315   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:20.966513   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:21.467069   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
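These repeated pgrep runs are a simple poll: roughly every 500ms, check whether a kube-apiserver process matching the minikube profile exists, up to an overall deadline. A compact sketch of such a wait loop (the 4-minute deadline is an assumed value; minikube's actual timeout differs by phase):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process shows up
// or the deadline passes, mirroring the repeated Run lines in the log.
func waitForAPIServerProcess(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same predicate as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(500*time.Millisecond, 4*time.Minute); err != nil { // timeout value assumed
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver is running")
}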
	I0520 11:46:22.138009   58840 start.go:364] duration metric: took 2m20.094710429s to acquireMachinesLock for "default-k8s-diff-port-668445"
	I0520 11:46:22.138085   58840 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:22.138096   58840 fix.go:54] fixHost starting: 
	I0520 11:46:22.138535   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:22.138577   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:22.154345   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0520 11:46:22.154701   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:22.155172   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:22.155198   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:22.155492   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:22.155667   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:22.155834   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:22.157486   58840 fix.go:112] recreateIfNeeded on default-k8s-diff-port-668445: state=Stopped err=<nil>
	I0520 11:46:22.157525   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	W0520 11:46:22.157703   58840 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:22.159724   58840 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-668445" ...
	I0520 11:46:20.742954   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743480   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has current primary IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.743503   58398 main.go:141] libmachine: (embed-certs-274976) Found IP for machine: 192.168.50.167
	I0520 11:46:20.743517   58398 main.go:141] libmachine: (embed-certs-274976) Reserving static IP address...
	I0520 11:46:20.743907   58398 main.go:141] libmachine: (embed-certs-274976) Reserved static IP address: 192.168.50.167
	I0520 11:46:20.743930   58398 main.go:141] libmachine: (embed-certs-274976) Waiting for SSH to be available...
	I0520 11:46:20.743955   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.744000   58398 main.go:141] libmachine: (embed-certs-274976) DBG | skip adding static IP to network mk-embed-certs-274976 - found existing host DHCP lease matching {name: "embed-certs-274976", mac: "52:54:00:8b:95:29", ip: "192.168.50.167"}
	I0520 11:46:20.744017   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Getting to WaitForSSH function...
	I0520 11:46:20.746685   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747071   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.747112   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.747244   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH client type: external
	I0520 11:46:20.747259   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa (-rw-------)
	I0520 11:46:20.747278   58398 main.go:141] libmachine: (embed-certs-274976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:20.747287   58398 main.go:141] libmachine: (embed-certs-274976) DBG | About to run SSH command:
	I0520 11:46:20.747295   58398 main.go:141] libmachine: (embed-certs-274976) DBG | exit 0
	I0520 11:46:20.872997   58398 main.go:141] libmachine: (embed-certs-274976) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:20.873377   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetConfigRaw
	I0520 11:46:20.874039   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:20.876476   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.876827   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.876859   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.877169   58398 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/config.json ...
	I0520 11:46:20.877407   58398 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:20.877432   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:20.877643   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.879744   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880089   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.880113   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.880328   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.880529   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880673   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.880840   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.881034   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.881263   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.881275   58398 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:20.993535   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:20.993572   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.993813   58398 buildroot.go:166] provisioning hostname "embed-certs-274976"
	I0520 11:46:20.993841   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:20.994034   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:20.996567   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.996922   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:20.996970   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:20.997092   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:20.997280   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997437   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:20.997580   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:20.997734   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:20.997993   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:20.998013   58398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274976 && echo "embed-certs-274976" | sudo tee /etc/hostname
	I0520 11:46:21.125482   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274976
	
	I0520 11:46:21.125512   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.128347   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128717   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.128746   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.128916   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.129120   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129296   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.129436   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.129613   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.129774   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.129789   58398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274976/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:21.250287   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:21.250320   58398 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:21.250371   58398 buildroot.go:174] setting up certificates
	I0520 11:46:21.250385   58398 provision.go:84] configureAuth start
	I0520 11:46:21.250403   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetMachineName
	I0520 11:46:21.250726   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:21.253610   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254038   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.254053   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.254303   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.256633   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257007   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.257046   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.257206   58398 provision.go:143] copyHostCerts
	I0520 11:46:21.257266   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:21.257289   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:21.257369   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:21.257489   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:21.257499   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:21.257537   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:21.257617   58398 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:21.257626   58398 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:21.257661   58398 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:21.257729   58398 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274976 san=[127.0.0.1 192.168.50.167 embed-certs-274976 localhost minikube]
	I0520 11:46:21.458096   58398 provision.go:177] copyRemoteCerts
	I0520 11:46:21.458152   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:21.458178   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.460662   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461006   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.461042   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.461205   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.461370   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.461545   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.461705   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.547703   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:21.574345   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0520 11:46:21.598843   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:21.624064   58398 provision.go:87] duration metric: took 373.664517ms to configureAuth
	I0520 11:46:21.624091   58398 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:21.624262   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:21.624330   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.626996   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627378   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.627406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.627556   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.627711   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.627902   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.628077   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.628231   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:21.628402   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:21.628420   58398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:21.894896   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:21.894923   58398 machine.go:97] duration metric: took 1.017499149s to provisionDockerMachine
	I0520 11:46:21.894936   58398 start.go:293] postStartSetup for "embed-certs-274976" (driver="kvm2")
	I0520 11:46:21.894948   58398 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:21.894966   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:21.895279   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:21.895304   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:21.897866   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898160   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:21.898193   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:21.898292   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:21.898484   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:21.898663   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:21.898779   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:21.984571   58398 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:21.988891   58398 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:21.988918   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:21.988995   58398 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:21.989129   58398 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:21.989257   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:21.999219   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:22.023032   58398 start.go:296] duration metric: took 128.082095ms for postStartSetup
	I0520 11:46:22.023076   58398 fix.go:56] duration metric: took 21.277188857s for fixHost
	I0520 11:46:22.023094   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.025971   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026362   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.026393   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.026536   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.026725   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.026867   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.027017   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.027246   58398 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:22.027424   58398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0520 11:46:22.027437   58398 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:22.137813   58398 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205582.113779638
	
	I0520 11:46:22.137840   58398 fix.go:216] guest clock: 1716205582.113779638
	I0520 11:46:22.137850   58398 fix.go:229] Guest: 2024-05-20 11:46:22.113779638 +0000 UTC Remote: 2024-05-20 11:46:22.023079043 +0000 UTC m=+224.083510648 (delta=90.700595ms)
	I0520 11:46:22.137898   58398 fix.go:200] guest clock delta is within tolerance: 90.700595ms
	I0520 11:46:22.137906   58398 start.go:83] releasing machines lock for "embed-certs-274976", held for 21.39205527s
	I0520 11:46:22.137943   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.138222   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:22.140951   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141358   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.141388   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.141558   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142024   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142187   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:22.142245   58398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:22.142291   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.142395   58398 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:22.142417   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:22.144896   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145116   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145386   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145419   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145490   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:22.145518   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:22.145525   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145647   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:22.145703   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145827   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:22.145849   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146019   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:22.146022   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:22.146175   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	W0520 11:46:22.226345   58398 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:22.226421   58398 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:22.248678   58398 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:22.404298   58398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:22.410174   58398 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:22.410249   58398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:22.426286   58398 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:22.426311   58398 start.go:494] detecting cgroup driver to use...
	I0520 11:46:22.426369   58398 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:22.441746   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:22.455445   58398 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:22.455515   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:22.469441   58398 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:22.485963   58398 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:22.604443   58398 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:22.758542   58398 docker.go:233] disabling docker service ...
	I0520 11:46:22.758628   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:22.773371   58398 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:22.786871   58398 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:22.940022   58398 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:23.076857   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:23.091980   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:23.111380   58398 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:23.111434   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.122532   58398 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:23.122597   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.134355   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.145484   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.156776   58398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:23.169017   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.180681   58398 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.205664   58398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:23.217267   58398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:23.228490   58398 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:23.228555   58398 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:23.243578   58398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:23.255394   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:23.387407   58398 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:23.537543   58398 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:23.537613   58398 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:23.542886   58398 start.go:562] Will wait 60s for crictl version
	I0520 11:46:23.542934   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:46:23.546740   58398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:23.587662   58398 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:23.587746   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.615735   58398 ssh_runner.go:195] Run: crio --version
	I0520 11:46:23.648727   58398 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:21.966776   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.466411   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.967037   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.467218   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:23.966606   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.466990   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:24.966805   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:25.966829   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:26.466968   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:22.161646   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Start
	I0520 11:46:22.161813   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring networks are active...
	I0520 11:46:22.162609   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network default is active
	I0520 11:46:22.163020   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Ensuring network mk-default-k8s-diff-port-668445 is active
	I0520 11:46:22.163509   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Getting domain xml...
	I0520 11:46:22.164146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Creating domain...
	I0520 11:46:23.456104   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting to get IP...
	I0520 11:46:23.457157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457703   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.457735   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.457652   59511 retry.go:31] will retry after 262.047261ms: waiting for machine to come up
	I0520 11:46:23.721329   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721756   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:23.721815   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:23.721742   59511 retry.go:31] will retry after 301.868596ms: waiting for machine to come up
	I0520 11:46:24.025477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026042   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.026072   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.026006   59511 retry.go:31] will retry after 431.470276ms: waiting for machine to come up
	I0520 11:46:24.458962   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:24.459564   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:24.459450   59511 retry.go:31] will retry after 566.093902ms: waiting for machine to come up
	I0520 11:46:25.027215   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027787   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.027812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.027729   59511 retry.go:31] will retry after 471.2178ms: waiting for machine to come up
	I0520 11:46:25.500575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501067   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:25.501106   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:25.501010   59511 retry.go:31] will retry after 682.542082ms: waiting for machine to come up
	I0520 11:46:26.184886   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185367   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:26.185400   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:26.185315   59511 retry.go:31] will retry after 973.33325ms: waiting for machine to come up
	I0520 11:46:23.650320   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetIP
	I0520 11:46:23.653452   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.653919   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:23.653962   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:23.654259   58398 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:23.658592   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:23.671310   58398 kubeadm.go:877] updating cluster {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:23.671460   58398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:23.671523   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:23.715873   58398 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:23.715947   58398 ssh_runner.go:195] Run: which lz4
	I0520 11:46:23.720362   58398 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:23.724821   58398 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:23.724852   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:25.254824   58398 crio.go:462] duration metric: took 1.534501958s to copy over tarball
	I0520 11:46:25.254937   58398 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:27.527739   58398 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.272764169s)
	I0520 11:46:27.527773   58398 crio.go:469] duration metric: took 2.272914971s to extract the tarball
	I0520 11:46:27.527780   58398 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:27.564738   58398 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:27.613196   58398 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:27.613220   58398 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:27.613229   58398 kubeadm.go:928] updating node { 192.168.50.167 8443 v1.30.1 crio true true} ...
	I0520 11:46:27.613321   58398 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:27.613381   58398 ssh_runner.go:195] Run: crio config
	I0520 11:46:27.669907   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:27.669931   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:27.669948   58398 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:27.669969   58398 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.167 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274976 NodeName:embed-certs-274976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:27.670117   58398 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274976"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:27.670180   58398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:27.680414   58398 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:27.680473   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:27.690431   58398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0520 11:46:27.707625   58398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:27.724120   58398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0520 11:46:27.744218   58398 ssh_runner.go:195] Run: grep 192.168.50.167	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:27.748173   58398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:27.760752   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:27.891061   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:27.909326   58398 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976 for IP: 192.168.50.167
	I0520 11:46:27.909354   58398 certs.go:194] generating shared ca certs ...
	I0520 11:46:27.909387   58398 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:27.909574   58398 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:27.909637   58398 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:27.909650   58398 certs.go:256] generating profile certs ...
	I0520 11:46:27.909754   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/client.key
	I0520 11:46:27.909843   58398 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key.d6be4a1f
	I0520 11:46:27.909922   58398 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key
	I0520 11:46:27.910072   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:27.910127   58398 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:27.910146   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:27.910183   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:27.910208   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:27.910229   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:27.910278   58398 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:27.911030   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:27.949250   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:26.966832   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.466585   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.967225   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.466286   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:28.966679   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.466425   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:29.966440   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.467175   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.966457   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.466596   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:27.160027   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160492   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:27.160515   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:27.160435   59511 retry.go:31] will retry after 1.263563226s: waiting for machine to come up
	I0520 11:46:28.426016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426503   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:28.426555   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:28.426463   59511 retry.go:31] will retry after 1.407211588s: waiting for machine to come up
	I0520 11:46:29.835680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836101   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:29.836121   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:29.836058   59511 retry.go:31] will retry after 2.320334078s: waiting for machine to come up
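
The `retry.go:31` lines above show the driver polling libvirt for the VM's DHCP lease with increasing, jittered delays. Below is a minimal Go sketch of that retry-with-backoff pattern; `lookupIP` is a hypothetical placeholder, not a real minikube or libmachine function.

```go
// retrywait.go - a minimal sketch of a retry-with-backoff loop like the one
// logged above while waiting for the machine's IP. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real libvirt DHCP-lease query; it is a
// placeholder, not an actual minikube function.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(maxAttempts int) (string, error) {
	backoff := time.Second
	for i := 0; i < maxAttempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jittered, growing delays, roughly like the 1.2s/1.4s/2.3s/2.7s/3.1s
		// waits in the log above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	if _, err := waitForIP(5); err != nil {
		fmt.Println(err)
	}
}
```
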
	I0520 11:46:27.987645   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:28.036652   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:28.083998   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0520 11:46:28.116104   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:28.154695   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:28.181013   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/embed-certs-274976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:28.213382   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:28.243550   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:28.268537   58398 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:28.292856   58398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:28.310801   58398 ssh_runner.go:195] Run: openssl version
	I0520 11:46:28.316724   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:28.327461   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332286   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.332345   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:28.339005   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:28.351192   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:28.362639   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367566   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.367623   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:28.373275   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:28.384549   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:28.395600   58398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400136   58398 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.400188   58398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:28.405914   58398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:28.417533   58398 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:28.422327   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:28.428746   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:28.434840   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:28.441227   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:28.447310   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:28.453450   58398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
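
The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. A minimal Go sketch of the same check using only the standard library follows; the file path in `main` is one of the certs from the log, used as an example, and this is an illustration rather than minikube's implementation.

```go
// expirycheck.go - a minimal sketch of the 24-hour validity check that the
// `openssl x509 -checkend 86400` invocations above perform, written with
// Go's crypto/x509 package.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend: the cert must not expire within the window.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}
```
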
	I0520 11:46:28.462637   58398 kubeadm.go:391] StartCluster: {Name:embed-certs-274976 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-274976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:28.462746   58398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:28.462828   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.501759   58398 cri.go:89] found id: ""
	I0520 11:46:28.501826   58398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:28.512536   58398 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:28.512554   58398 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:28.512559   58398 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:28.512603   58398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:28.522288   58398 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:28.523266   58398 kubeconfig.go:125] found "embed-certs-274976" server: "https://192.168.50.167:8443"
	I0520 11:46:28.525188   58398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:28.535056   58398 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.167
	I0520 11:46:28.535092   58398 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:28.535104   58398 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:28.535157   58398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:28.573317   58398 cri.go:89] found id: ""
	I0520 11:46:28.573429   58398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:28.593111   58398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:28.603312   58398 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:28.603345   58398 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:28.603398   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:46:28.612794   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:28.612870   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:28.622819   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:46:28.634550   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:28.634614   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:28.644568   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.654918   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:28.654980   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:28.665581   58398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:46:28.677090   58398 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:28.677167   58398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:28.687152   58398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:28.697505   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:28.833183   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:29.963435   58398 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.130211655s)
	I0520 11:46:29.963486   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.185882   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:30.248946   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
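
The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) regenerate the control-plane configuration during the restart. The sketch below replays the same commands from Go via os/exec, assuming kubeadm is installed at the path shown in the log; it is illustrative only, not minikube's implementation.

```go
// phases.go - a minimal sketch that replays the `kubeadm init phase` steps
// logged above, in order, using os/exec. The binary path, config path, and
// bash wrapper mirror the logged commands.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```
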
	I0520 11:46:30.334924   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:30.335020   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:30.835637   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.336028   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:31.354314   58398 api_server.go:72] duration metric: took 1.01937754s to wait for apiserver process to appear ...
	I0520 11:46:31.354343   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:31.354365   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:31.354916   58398 api_server.go:269] stopped: https://192.168.50.167:8443/healthz: Get "https://192.168.50.167:8443/healthz": dial tcp 192.168.50.167:8443: connect: connection refused
	I0520 11:46:31.854629   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.299609   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.299653   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.299669   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.370279   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.370304   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.370317   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.378426   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:34.378448   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:34.855034   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:34.860164   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:34.860205   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.354583   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.362461   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:35.362489   58398 api_server.go:103] status: https://192.168.50.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:35.854836   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:46:35.859322   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:46:35.867050   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:35.867077   58398 api_server.go:131] duration metric: took 4.512726716s to wait for apiserver health ...
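
The healthz wait above keeps probing `GET /healthz` until the apiserver stops answering with connection-refused, 403 (the anonymous user before RBAC bootstrap completes), or 500 (post-start hooks still settling) and finally returns 200. A minimal Go sketch of that polling loop follows; the URL and timeout are examples taken from the log, and TLS verification is skipped purely for illustration.

```go
// healthzwait.go - a minimal sketch of the /healthz polling shown above:
// retry until the endpoint returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a self-signed CA during bring-up, so this
		// illustrative probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: the control plane is up
			}
			// 403/500 responses while bootstrap hooks settle: keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.167:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
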
	I0520 11:46:35.867088   58398 cni.go:84] Creating CNI manager for ""
	I0520 11:46:35.867096   58398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:35.869427   58398 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:31.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.467311   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.966392   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.466560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:33.967113   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.467393   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:34.966336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.466951   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:35.967302   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:36.467339   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:32.158195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:32.158712   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:32.158637   59511 retry.go:31] will retry after 2.733920459s: waiting for machine to come up
	I0520 11:46:34.893892   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894436   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:34.894480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:34.894374   59511 retry.go:31] will retry after 3.119057685s: waiting for machine to come up
	I0520 11:46:35.871058   58398 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:35.898211   58398 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
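
The 496-byte `1-k8s.conflist` copied above configures the bridge CNI chosen for the kvm2 + crio combination. The sketch below writes a generic bridge + host-local conflist of that shape; the JSON is a typical example, not the exact content minikube generates.

```go
// cniconf.go - a sketch of writing a minimal bridge CNI conflist like the
// 1-k8s.conflist transferred above. The JSON body is a generic example.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Writing to /etc/cni/net.d requires root on the node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```
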
	I0520 11:46:35.942295   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:35.954120   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:35.954156   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:35.954171   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:35.954181   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:35.954194   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:35.954204   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0520 11:46:35.954213   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:35.954220   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:35.954229   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0520 11:46:35.954239   58398 system_pods.go:74] duration metric: took 11.924168ms to wait for pod list to return data ...
	I0520 11:46:35.954249   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:35.957627   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:35.957664   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:35.957679   58398 node_conditions.go:105] duration metric: took 3.421729ms to run NodePressure ...
	I0520 11:46:35.957700   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:36.235314   58398 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241075   58398 kubeadm.go:733] kubelet initialised
	I0520 11:46:36.241101   58398 kubeadm.go:734] duration metric: took 5.759765ms waiting for restarted kubelet to initialise ...
	I0520 11:46:36.241110   58398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:36.248030   58398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.254230   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254263   58398 pod_ready.go:81] duration metric: took 6.20401ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.254274   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.254281   58398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.259348   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259379   58398 pod_ready.go:81] duration metric: took 5.088735ms for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.259392   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "etcd-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.259402   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.266963   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.266997   58398 pod_ready.go:81] duration metric: took 7.580054ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.267010   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.267020   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.346321   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346353   58398 pod_ready.go:81] duration metric: took 79.314801ms for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.346363   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.346371   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:36.746645   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746674   58398 pod_ready.go:81] duration metric: took 400.291751ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:36.746686   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-proxy-6n9d5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:36.746692   58398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.146413   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146443   58398 pod_ready.go:81] duration metric: took 399.744378ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.146454   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.146465   58398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:37.545943   58398 pod_ready.go:97] node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545966   58398 pod_ready.go:81] duration metric: took 399.491619ms for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:37.545974   58398 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-274976" hosting pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:37.545981   58398 pod_ready.go:38] duration metric: took 1.304861628s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
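
The `pod_ready.go` loop above polls each system-critical pod for a `Ready` condition and skips pods whose node is not yet Ready. A minimal client-go sketch of such a readiness wait is below; the kubeconfig path and pod name are taken from the log as examples, and the code is not minikube's actual implementation.

```go
// podready.go - a minimal client-go sketch of waiting for a pod's Ready
// condition, in the spirit of the pod_ready.go polling logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod carries a Ready=True condition.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-274976", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```
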
	I0520 11:46:37.545997   58398 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:37.558076   58398 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:37.558097   58398 kubeadm.go:591] duration metric: took 9.045531679s to restartPrimaryControlPlane
	I0520 11:46:37.558106   58398 kubeadm.go:393] duration metric: took 9.095477048s to StartCluster
	I0520 11:46:37.558125   58398 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.558199   58398 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:37.559701   58398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:37.559920   58398 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.167 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:37.561751   58398 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:37.560038   58398 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:37.560143   58398 config.go:182] Loaded profile config "embed-certs-274976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:37.563096   58398 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274976"
	I0520 11:46:37.563106   58398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:37.563124   58398 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274976"
	W0520 11:46:37.563133   58398 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:37.563156   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563164   58398 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274976"
	I0520 11:46:37.563179   58398 addons.go:69] Setting metrics-server=true in profile "embed-certs-274976"
	I0520 11:46:37.563193   58398 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274976"
	I0520 11:46:37.563204   58398 addons.go:234] Setting addon metrics-server=true in "embed-certs-274976"
	W0520 11:46:37.563215   58398 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:37.563241   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.563516   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563558   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563575   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563600   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.563614   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.563638   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.577924   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0520 11:46:37.578096   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0520 11:46:37.578120   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0520 11:46:37.578322   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578480   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578527   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.578851   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.578879   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.578982   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579003   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579019   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.579038   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.579202   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579342   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579364   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.579385   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.579824   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579867   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.579871   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.579895   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.582214   58398 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274976"
	W0520 11:46:37.582230   58398 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:37.582249   58398 host.go:66] Checking if "embed-certs-274976" exists ...
	I0520 11:46:37.582511   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.582542   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.594308   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0520 11:46:37.594672   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.595047   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.595067   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.595525   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.595551   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0520 11:46:37.595709   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.596037   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.596545   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.596568   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.596980   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.597579   58398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:37.597599   58398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:37.597802   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.599963   58398 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:37.598457   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0520 11:46:37.601485   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:37.601505   58398 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:37.601527   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.601793   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.602219   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.602238   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.602502   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.602688   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.604122   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.604188   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.605753   58398 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:37.604566   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.604874   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.607190   58398 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.607205   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:37.607221   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.607240   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.607263   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.607414   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.607729   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.610088   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610462   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.610483   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.610738   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.610912   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.611072   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.611212   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.613655   58398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0520 11:46:37.613938   58398 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:37.614386   58398 main.go:141] libmachine: Using API Version  1
	I0520 11:46:37.614403   58398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:37.614653   58398 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:37.614826   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetState
	I0520 11:46:37.616151   58398 main.go:141] libmachine: (embed-certs-274976) Calling .DriverName
	I0520 11:46:37.616331   58398 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.616345   58398 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:37.616362   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHHostname
	I0520 11:46:37.619627   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620050   58398 main.go:141] libmachine: (embed-certs-274976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:95:29", ip: ""} in network mk-embed-certs-274976: {Iface:virbr2 ExpiryTime:2024-05-20 12:39:14 +0000 UTC Type:0 Mac:52:54:00:8b:95:29 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:embed-certs-274976 Clientid:01:52:54:00:8b:95:29}
	I0520 11:46:37.620072   58398 main.go:141] libmachine: (embed-certs-274976) DBG | domain embed-certs-274976 has defined IP address 192.168.50.167 and MAC address 52:54:00:8b:95:29 in network mk-embed-certs-274976
	I0520 11:46:37.620214   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHPort
	I0520 11:46:37.620365   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHKeyPath
	I0520 11:46:37.620480   58398 main.go:141] libmachine: (embed-certs-274976) Calling .GetSSHUsername
	I0520 11:46:37.620660   58398 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/embed-certs-274976/id_rsa Username:docker}
	I0520 11:46:37.734680   58398 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:37.751098   58398 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:37.842846   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:37.842867   58398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:37.853894   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:37.862145   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:37.862167   58398 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:37.883656   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:37.884046   58398 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:37.884061   58398 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:37.941086   58398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:38.187392   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187415   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187819   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.187841   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.187855   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.187861   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.187822   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.188095   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.188115   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.193695   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.193710   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.193926   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.193942   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813454   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813472   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.813794   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.813813   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.813821   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.813843   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.813894   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.814140   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.814154   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897046   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897078   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897406   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897453   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897469   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897494   58398 main.go:141] libmachine: Making call to close driver server
	I0520 11:46:38.897505   58398 main.go:141] libmachine: (embed-certs-274976) Calling .Close
	I0520 11:46:38.897755   58398 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:46:38.897759   58398 main.go:141] libmachine: (embed-certs-274976) DBG | Closing plugin on server side
	I0520 11:46:38.897772   58398 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:46:38.897784   58398 addons.go:470] Verifying addon metrics-server=true in "embed-certs-274976"
	I0520 11:46:38.899910   58398 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0520 11:46:36.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.467251   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:37.967153   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.967236   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.466627   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:39.966560   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.466358   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:40.967165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:41.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:38.014993   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015405   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | unable to find current IP address of domain default-k8s-diff-port-668445 in network mk-default-k8s-diff-port-668445
	I0520 11:46:38.015458   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | I0520 11:46:38.015373   59511 retry.go:31] will retry after 4.505016985s: waiting for machine to come up
	I0520 11:46:38.901272   58398 addons.go:505] duration metric: took 1.341257876s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0520 11:46:39.754549   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:41.758717   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:43.865868   57519 start.go:364] duration metric: took 57.7314587s to acquireMachinesLock for "no-preload-814599"
	I0520 11:46:43.865920   57519 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:46:43.865931   57519 fix.go:54] fixHost starting: 
	I0520 11:46:43.866311   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:43.866347   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:43.885634   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
	I0520 11:46:43.886068   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:43.886620   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:46:43.886659   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:43.887013   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:43.887200   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:46:43.887337   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:46:43.888902   57519 fix.go:112] recreateIfNeeded on no-preload-814599: state=Stopped err=<nil>
	I0520 11:46:43.888958   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	W0520 11:46:43.889108   57519 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:46:43.891052   57519 out.go:177] * Restarting existing kvm2 VM for "no-preload-814599" ...
	I0520 11:46:42.521630   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522114   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Found IP for machine: 192.168.61.221
	I0520 11:46:42.522140   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserving static IP address...
	I0520 11:46:42.522157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has current primary IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.522534   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Reserved static IP address: 192.168.61.221
	I0520 11:46:42.522556   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Waiting for SSH to be available...
	I0520 11:46:42.522571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.522593   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | skip adding static IP to network mk-default-k8s-diff-port-668445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-668445", mac: "52:54:00:55:ba:46", ip: "192.168.61.221"}
	I0520 11:46:42.522613   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Getting to WaitForSSH function...
	I0520 11:46:42.524812   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525199   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.525221   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.525362   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH client type: external
	I0520 11:46:42.525391   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa (-rw-------)
	I0520 11:46:42.525427   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:46:42.525443   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | About to run SSH command:
	I0520 11:46:42.525455   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | exit 0
	I0520 11:46:42.648689   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | SSH cmd err, output: <nil>: 
	I0520 11:46:42.649122   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetConfigRaw
	I0520 11:46:42.649745   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:42.652220   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652571   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.652596   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.652821   58840 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/config.json ...
	I0520 11:46:42.653026   58840 machine.go:94] provisionDockerMachine start ...
	I0520 11:46:42.653046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:42.653235   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.655453   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655758   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.655777   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.655923   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.656089   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.656384   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.656551   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.656729   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.656739   58840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:46:42.761195   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:46:42.761225   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761450   58840 buildroot.go:166] provisioning hostname "default-k8s-diff-port-668445"
	I0520 11:46:42.761477   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:42.761666   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.764340   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764670   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.764695   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.764801   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.764973   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765131   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.765285   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.765472   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.765634   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.765647   58840 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-668445 && echo "default-k8s-diff-port-668445" | sudo tee /etc/hostname
	I0520 11:46:42.883179   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-668445
	
	I0520 11:46:42.883204   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:42.885913   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886302   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:42.886347   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:42.886487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:42.886699   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.886884   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:42.887040   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:42.887238   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:42.887419   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:42.887444   58840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-668445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-668445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-668445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:46:43.001769   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:46:43.001792   58840 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:46:43.001825   58840 buildroot.go:174] setting up certificates
	I0520 11:46:43.001834   58840 provision.go:84] configureAuth start
	I0520 11:46:43.001842   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetMachineName
	I0520 11:46:43.002064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.004606   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.004897   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.004944   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.005087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.006929   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.007296   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.007386   58840 provision.go:143] copyHostCerts
	I0520 11:46:43.007461   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:46:43.007470   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:46:43.007523   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:46:43.007608   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:46:43.007617   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:46:43.007637   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:46:43.007686   58840 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:46:43.007693   58840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:46:43.007709   58840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:46:43.007753   58840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-668445 san=[127.0.0.1 192.168.61.221 default-k8s-diff-port-668445 localhost minikube]
	I0520 11:46:43.165780   58840 provision.go:177] copyRemoteCerts
	I0520 11:46:43.165835   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:46:43.165860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.168782   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169136   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.169181   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.169343   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.169575   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.169717   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.169892   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.252175   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:46:43.280313   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0520 11:46:43.306901   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:46:43.331191   58840 provision.go:87] duration metric: took 329.344445ms to configureAuth
	I0520 11:46:43.331227   58840 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:46:43.331478   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:43.331576   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.334173   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334573   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.334603   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.334770   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.334961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335108   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.335236   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.335381   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.335593   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.335615   58840 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:46:43.630180   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:46:43.630213   58840 machine.go:97] duration metric: took 977.172725ms to provisionDockerMachine
	I0520 11:46:43.630229   58840 start.go:293] postStartSetup for "default-k8s-diff-port-668445" (driver="kvm2")
	I0520 11:46:43.630244   58840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:46:43.630270   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.630597   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:46:43.630635   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.633307   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633650   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.633680   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.633798   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.633976   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.634141   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.634264   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.719350   58840 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:46:43.723320   58840 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:46:43.723339   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:46:43.723396   58840 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:46:43.723478   58840 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:46:43.723564   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:46:43.732422   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:43.757138   58840 start.go:296] duration metric: took 126.894206ms for postStartSetup
	I0520 11:46:43.757179   58840 fix.go:56] duration metric: took 21.619082215s for fixHost
	I0520 11:46:43.757202   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.759486   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.759860   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.759887   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.760065   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.760248   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760425   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.760527   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.760654   58840 main.go:141] libmachine: Using SSH client type: native
	I0520 11:46:43.760824   58840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.221 22 <nil> <nil>}
	I0520 11:46:43.760839   58840 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:46:43.865691   58840 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205603.841593335
	
	I0520 11:46:43.865713   58840 fix.go:216] guest clock: 1716205603.841593335
	I0520 11:46:43.865722   58840 fix.go:229] Guest: 2024-05-20 11:46:43.841593335 +0000 UTC Remote: 2024-05-20 11:46:43.757183475 +0000 UTC m=+161.848941663 (delta=84.40986ms)
	I0520 11:46:43.865777   58840 fix.go:200] guest clock delta is within tolerance: 84.40986ms
	I0520 11:46:43.865790   58840 start.go:83] releasing machines lock for "default-k8s-diff-port-668445", held for 21.727726024s
	I0520 11:46:43.865824   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.866087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:43.868681   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869054   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.869076   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.869256   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869792   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.869961   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:43.870035   58840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:46:43.870091   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.870174   58840 ssh_runner.go:195] Run: cat /version.json
	I0520 11:46:43.870195   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:43.872817   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873059   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873146   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873253   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873487   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873496   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:43.873537   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:43.873684   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.873687   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:43.873861   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.873866   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:43.874033   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:43.874046   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:43.874234   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	W0520 11:46:43.973840   58840 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:46:43.973913   58840 ssh_runner.go:195] Run: systemctl --version
	I0520 11:46:43.980706   58840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:46:44.132625   58840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:46:44.140558   58840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:46:44.140629   58840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:46:44.158922   58840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:46:44.158946   58840 start.go:494] detecting cgroup driver to use...
	I0520 11:46:44.159006   58840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:46:44.179206   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:46:44.194183   58840 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:46:44.194254   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:46:44.208701   58840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:46:44.223086   58840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:46:44.342811   58840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:46:44.491358   58840 docker.go:233] disabling docker service ...
	I0520 11:46:44.491428   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:46:44.509165   58840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:46:44.526088   58840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:46:44.700452   58840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:46:44.833129   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:46:44.849090   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:46:44.870148   58840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:46:44.870222   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.880951   58840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:46:44.881016   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.895038   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.908245   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.918844   58840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:46:44.929838   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.940446   58840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.960350   58840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:46:44.974587   58840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:46:44.985689   58840 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:46:44.985748   58840 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:46:44.999929   58840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:46:45.010732   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:45.140038   58840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:46:45.291656   58840 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:46:45.291728   58840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:46:45.297351   58840 start.go:562] Will wait 60s for crictl version
	I0520 11:46:45.297412   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:46:45.301297   58840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:46:45.343635   58840 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:46:45.343703   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.373914   58840 ssh_runner.go:195] Run: crio --version
	I0520 11:46:45.404596   58840 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:46:43.892523   57519 main.go:141] libmachine: (no-preload-814599) Calling .Start
	I0520 11:46:43.892686   57519 main.go:141] libmachine: (no-preload-814599) Ensuring networks are active...
	I0520 11:46:43.893370   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network default is active
	I0520 11:46:43.893729   57519 main.go:141] libmachine: (no-preload-814599) Ensuring network mk-no-preload-814599 is active
	I0520 11:46:43.894105   57519 main.go:141] libmachine: (no-preload-814599) Getting domain xml...
	I0520 11:46:43.894847   57519 main.go:141] libmachine: (no-preload-814599) Creating domain...
	I0520 11:46:45.229992   57519 main.go:141] libmachine: (no-preload-814599) Waiting to get IP...
	I0520 11:46:45.230981   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.231451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.231518   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.231438   59753 retry.go:31] will retry after 187.539159ms: waiting for machine to come up
	I0520 11:46:45.420854   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.421369   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.421393   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.421328   59753 retry.go:31] will retry after 327.709711ms: waiting for machine to come up
	I0520 11:46:45.750799   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:45.751384   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:45.751414   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:45.751334   59753 retry.go:31] will retry after 445.096849ms: waiting for machine to come up
	I0520 11:46:46.197899   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.198567   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.198598   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.198536   59753 retry.go:31] will retry after 561.029629ms: waiting for machine to come up
	I0520 11:46:41.967129   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.467212   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:42.966637   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.466403   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:43.966371   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.466743   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:44.967293   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.467205   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.966786   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:46.466728   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:45.405909   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetIP
	I0520 11:46:45.408865   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409399   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:45.409429   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:45.409655   58840 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0520 11:46:45.413815   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:45.426264   58840 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:46:45.426366   58840 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:46:45.426405   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:45.468938   58840 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:46:45.469021   58840 ssh_runner.go:195] Run: which lz4
	I0520 11:46:45.474957   58840 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0520 11:46:45.481006   58840 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0520 11:46:45.481039   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0520 11:46:44.255728   58398 node_ready.go:53] node "embed-certs-274976" has status "Ready":"False"
	I0520 11:46:44.756897   58398 node_ready.go:49] node "embed-certs-274976" has status "Ready":"True"
	I0520 11:46:44.756963   58398 node_ready.go:38] duration metric: took 7.005828991s for node "embed-certs-274976" to be "Ready" ...
	I0520 11:46:44.756978   58398 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:44.767584   58398 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774049   58398 pod_ready.go:92] pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:44.774078   58398 pod_ready.go:81] duration metric: took 6.468023ms for pod "coredns-7db6d8ff4d-64txr" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:44.774099   58398 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795658   58398 pod_ready.go:92] pod "etcd-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.795691   58398 pod_ready.go:81] duration metric: took 2.021582833s for pod "etcd-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.795705   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810711   58398 pod_ready.go:92] pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:46.810739   58398 pod_ready.go:81] duration metric: took 15.025215ms for pod "kube-apiserver-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.810751   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:46.760855   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:46.761534   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:46.761579   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:46.761484   59753 retry.go:31] will retry after 497.075791ms: waiting for machine to come up
	I0520 11:46:47.260059   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:47.260542   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:47.260580   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:47.260479   59753 retry.go:31] will retry after 910.980097ms: waiting for machine to come up
	I0520 11:46:48.173677   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:48.174161   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:48.174189   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:48.174108   59753 retry.go:31] will retry after 931.969789ms: waiting for machine to come up
	I0520 11:46:49.107672   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:49.108241   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:49.108271   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:49.108194   59753 retry.go:31] will retry after 1.075042352s: waiting for machine to come up
	I0520 11:46:50.184518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:50.185078   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:50.185108   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:50.185025   59753 retry.go:31] will retry after 1.174646184s: waiting for machine to come up
	I0520 11:46:46.967320   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.466459   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.966641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.467404   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.966845   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.467146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:49.966601   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.466616   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:50.967389   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:51.466668   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:47.010637   58840 crio.go:462] duration metric: took 1.535717978s to copy over tarball
	I0520 11:46:47.010719   58840 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0520 11:46:49.308712   58840 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297962166s)
	I0520 11:46:49.308747   58840 crio.go:469] duration metric: took 2.298080001s to extract the tarball
	I0520 11:46:49.308756   58840 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0520 11:46:49.346851   58840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:46:49.393090   58840 crio.go:514] all images are preloaded for cri-o runtime.
	I0520 11:46:49.393118   58840 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:46:49.393126   58840 kubeadm.go:928] updating node { 192.168.61.221 8444 v1.30.1 crio true true} ...
	I0520 11:46:49.393247   58840 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-668445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:46:49.393326   58840 ssh_runner.go:195] Run: crio config
	I0520 11:46:49.438553   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:49.438575   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:49.438592   58840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:46:49.438618   58840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.221 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-668445 NodeName:default-k8s-diff-port-668445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:46:49.438792   58840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-668445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:46:49.438876   58840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:46:49.449415   58840 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:46:49.449487   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:46:49.460951   58840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0520 11:46:49.479654   58840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:46:49.497494   58840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0520 11:46:49.518474   58840 ssh_runner.go:195] Run: grep 192.168.61.221	control-plane.minikube.internal$ /etc/hosts
	I0520 11:46:49.522598   58840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:46:49.537237   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:49.686022   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:49.712850   58840 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445 for IP: 192.168.61.221
	I0520 11:46:49.712871   58840 certs.go:194] generating shared ca certs ...
	I0520 11:46:49.712885   58840 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:49.713065   58840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:46:49.713110   58840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:46:49.713123   58840 certs.go:256] generating profile certs ...
	I0520 11:46:49.713261   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.key
	I0520 11:46:49.713346   58840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key.50005da2
	I0520 11:46:49.713390   58840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key
	I0520 11:46:49.713548   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:46:49.713583   58840 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:46:49.713597   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:46:49.713629   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:46:49.713656   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:46:49.713686   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:46:49.713744   58840 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:46:49.714370   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:46:49.751367   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:46:49.787162   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:46:49.822660   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:46:49.863401   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0520 11:46:49.895295   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:46:49.928089   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:46:49.952687   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:46:49.982562   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:46:50.007234   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:46:50.031099   58840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:46:50.055146   58840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:46:50.073066   58840 ssh_runner.go:195] Run: openssl version
	I0520 11:46:50.080951   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:46:50.092134   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097436   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.097496   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:46:50.103714   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:46:50.114321   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:46:50.125104   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131225   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.131276   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:46:50.139113   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:46:50.153959   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:46:50.165441   58840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170302   58840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.170349   58840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:46:50.176388   58840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:46:50.188372   58840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:46:50.193168   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:46:50.199298   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:46:50.208100   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:46:50.214704   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:46:50.220565   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:46:50.226463   58840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:46:50.232400   58840 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-668445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-668445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:46:50.232472   58840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:46:50.232532   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.271977   58840 cri.go:89] found id: ""
	I0520 11:46:50.272070   58840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:46:50.283676   58840 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:46:50.283699   58840 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:46:50.283706   58840 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:46:50.283753   58840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:46:50.296808   58840 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:46:50.298406   58840 kubeconfig.go:125] found "default-k8s-diff-port-668445" server: "https://192.168.61.221:8444"
	I0520 11:46:50.300563   58840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:46:50.310940   58840 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.221
	I0520 11:46:50.310964   58840 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:46:50.310975   58840 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:46:50.311010   58840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:46:50.361321   58840 cri.go:89] found id: ""
	I0520 11:46:50.361400   58840 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:46:50.382699   58840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:46:50.397635   58840 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:46:50.397657   58840 kubeadm.go:156] found existing configuration files:
	
	I0520 11:46:50.397707   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0520 11:46:50.407303   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:46:50.407364   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:46:50.418331   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0520 11:46:50.427889   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:46:50.427938   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:46:50.437649   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.446810   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:46:50.446861   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:46:50.456428   58840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0520 11:46:50.465444   58840 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:46:50.465512   58840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:46:50.478585   58840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:46:50.489053   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:50.607045   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.346500   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.594241   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.661179   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:51.751253   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:46:51.751331   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:48.318166   58398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.318193   58398 pod_ready.go:81] duration metric: took 1.507434439s for pod "kube-controller-manager-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.318207   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323814   58398 pod_ready.go:92] pod "kube-proxy-6n9d5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.323850   58398 pod_ready.go:81] duration metric: took 5.632938ms for pod "kube-proxy-6n9d5" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.323865   58398 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356208   58398 pod_ready.go:92] pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace has status "Ready":"True"
	I0520 11:46:48.356235   58398 pod_ready.go:81] duration metric: took 32.360735ms for pod "kube-scheduler-embed-certs-274976" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:48.356247   58398 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:50.363910   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:52.364530   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:51.360905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:51.482451   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:51.482493   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:51.361293   59753 retry.go:31] will retry after 1.455979778s: waiting for machine to come up
	I0520 11:46:52.818580   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:52.819134   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:52.819162   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:52.819073   59753 retry.go:31] will retry after 2.771639261s: waiting for machine to come up
	I0520 11:46:55.593581   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:55.594138   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:55.594167   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:55.594089   59753 retry.go:31] will retry after 2.881341208s: waiting for machine to come up
	I0520 11:46:51.967150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.466994   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.967190   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.467334   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:53.967039   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.466506   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:54.966828   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.466808   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:55.967263   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:56.466949   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.252101   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.752498   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:52.794795   58840 api_server.go:72] duration metric: took 1.043540264s to wait for apiserver process to appear ...
	I0520 11:46:52.794823   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:46:52.794847   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:52.795569   58840 api_server.go:269] stopped: https://192.168.61.221:8444/healthz: Get "https://192.168.61.221:8444/healthz": dial tcp 192.168.61.221:8444: connect: connection refused
	I0520 11:46:53.294972   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.015833   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.015872   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.015889   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.053674   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:46:56.053705   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:46:56.294988   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.300440   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.300474   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:56.795720   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:56.813192   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:56.813221   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.295045   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.298962   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:46:57.298994   58840 api_server.go:103] status: https://192.168.61.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:46:57.795590   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:46:57.800795   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:46:57.809060   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:46:57.809084   58840 api_server.go:131] duration metric: took 5.014253865s to wait for apiserver health ...
	I0520 11:46:57.809092   58840 cni.go:84] Creating CNI manager for ""
	I0520 11:46:57.809098   58840 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:46:57.811020   58840 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:46:54.864263   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.363797   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:46:57.812202   58840 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:46:57.824260   58840 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:46:57.843189   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:46:57.851978   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:46:57.852012   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:46:57.852029   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:46:57.852035   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:46:57.852042   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:46:57.852046   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:46:57.852055   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:46:57.852063   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:46:57.852067   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:46:57.852073   58840 system_pods.go:74] duration metric: took 8.872046ms to wait for pod list to return data ...
	I0520 11:46:57.852084   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:46:57.855020   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:46:57.855043   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:46:57.855052   58840 node_conditions.go:105] duration metric: took 2.95984ms to run NodePressure ...
	I0520 11:46:57.855065   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:46:58.124410   58840 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129201   58840 kubeadm.go:733] kubelet initialised
	I0520 11:46:58.129222   58840 kubeadm.go:734] duration metric: took 4.789292ms waiting for restarted kubelet to initialise ...
	I0520 11:46:58.129229   58840 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:58.134883   58840 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.139522   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139557   58840 pod_ready.go:81] duration metric: took 4.638772ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.139569   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.139583   58840 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.143636   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143657   58840 pod_ready.go:81] duration metric: took 4.061011ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.143669   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.143677   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.147479   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147499   58840 pod_ready.go:81] duration metric: took 3.813494ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.147510   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.147517   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.247677   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247710   58840 pod_ready.go:81] duration metric: took 100.183226ms for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.247723   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.247730   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:58.648289   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648339   58840 pod_ready.go:81] duration metric: took 400.599602ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:58.648352   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-proxy-7ddtk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:58.648365   58840 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.047071   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047099   58840 pod_ready.go:81] duration metric: took 398.721478ms for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.047108   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.047114   58840 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:46:59.447840   58840 pod_ready.go:97] node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447866   58840 pod_ready.go:81] duration metric: took 400.738125ms for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:46:59.447876   58840 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-668445" hosting pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.447885   58840 pod_ready.go:38] duration metric: took 1.318647527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:46:59.447899   58840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:46:59.459985   58840 ops.go:34] apiserver oom_adj: -16
	I0520 11:46:59.460011   58840 kubeadm.go:591] duration metric: took 9.176298972s to restartPrimaryControlPlane
	I0520 11:46:59.460023   58840 kubeadm.go:393] duration metric: took 9.227629245s to StartCluster
	I0520 11:46:59.460038   58840 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.460120   58840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:46:59.461897   58840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:46:59.462172   58840 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.221 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:46:59.464109   58840 out.go:177] * Verifying Kubernetes components...
	I0520 11:46:59.462263   58840 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:46:59.462375   58840 config.go:182] Loaded profile config "default-k8s-diff-port-668445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:46:59.465582   58840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:46:59.464212   58840 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465645   58840 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465657   58840 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:46:59.465693   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464218   58840 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465752   58840 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.465768   58840 addons.go:243] addon metrics-server should already be in state true
	I0520 11:46:59.465794   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.464219   58840 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-668445"
	I0520 11:46:59.465877   58840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-668445"
	I0520 11:46:59.466087   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466116   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466210   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466216   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.466232   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.466235   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.481963   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0520 11:46:59.482030   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0520 11:46:59.482551   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.482768   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.483115   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483140   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483495   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.483516   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.483759   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.483930   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.483979   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.484484   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.484509   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.485272   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0520 11:46:59.485657   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.486686   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.486716   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.487080   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.487552   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.487588   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.487989   58840 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-668445"
	W0520 11:46:59.488007   58840 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:46:59.488037   58840 host.go:66] Checking if "default-k8s-diff-port-668445" exists ...
	I0520 11:46:59.488349   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.488384   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.500796   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0520 11:46:59.501376   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.501793   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0520 11:46:59.502058   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.502098   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.502308   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.502439   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.502661   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.502737   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0520 11:46:59.502976   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503009   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.503199   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.503329   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.503548   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.503648   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.503659   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.504278   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.504632   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.504814   58840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:46:59.506755   58840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:46:59.504875   58840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:46:59.505278   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.508201   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:46:59.508219   58840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:46:59.508238   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.509465   58840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:46:59.510750   58840 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.510765   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:46:59.510778   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.511023   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511444   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.511480   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.511707   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.511939   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.512117   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.512271   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.513612   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.513980   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.514009   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.514207   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.514354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.514460   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.514564   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.523003   58840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I0520 11:46:59.523361   58840 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:46:59.523835   58840 main.go:141] libmachine: Using API Version  1
	I0520 11:46:59.523858   58840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:46:59.524115   58840 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:46:59.524266   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetState
	I0520 11:46:59.525489   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .DriverName
	I0520 11:46:59.525661   58840 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.525673   58840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:46:59.525685   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHHostname
	I0520 11:46:59.527762   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528064   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ba:46", ip: ""} in network mk-default-k8s-diff-port-668445: {Iface:virbr3 ExpiryTime:2024-05-20 12:39:56 +0000 UTC Type:0 Mac:52:54:00:55:ba:46 Iaid: IPaddr:192.168.61.221 Prefix:24 Hostname:default-k8s-diff-port-668445 Clientid:01:52:54:00:55:ba:46}
	I0520 11:46:59.528087   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | domain default-k8s-diff-port-668445 has defined IP address 192.168.61.221 and MAC address 52:54:00:55:ba:46 in network mk-default-k8s-diff-port-668445
	I0520 11:46:59.528157   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHPort
	I0520 11:46:59.528309   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHKeyPath
	I0520 11:46:59.528414   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .GetSSHUsername
	I0520 11:46:59.528559   58840 sshutil.go:53] new ssh client: &{IP:192.168.61.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/default-k8s-diff-port-668445/id_rsa Username:docker}
	I0520 11:46:59.652795   58840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:46:59.672785   58840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:46:59.759683   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:46:59.760767   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:46:59.775512   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:46:59.775536   58840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:46:59.807608   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:46:59.807633   58840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:46:59.835597   58840 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:46:59.835624   58840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:46:59.862717   58840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:47:00.145002   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145032   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145352   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145352   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145372   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.145380   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.145389   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.145645   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.145700   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.145717   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.150992   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.151008   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.151358   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.151376   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.151385   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.802987   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803016   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803030   58840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.042235492s)
	I0520 11:47:00.803063   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803081   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803354   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803386   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803404   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803422   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803424   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803434   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803439   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803449   58840 main.go:141] libmachine: Making call to close driver server
	I0520 11:47:00.803464   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) Calling .Close
	I0520 11:47:00.803621   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803631   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.803642   58840 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-668445"
	I0520 11:47:00.803677   58840 main.go:141] libmachine: (default-k8s-diff-port-668445) DBG | Closing plugin on server side
	I0520 11:47:00.803729   58840 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:47:00.803743   58840 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:47:00.805628   58840 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0520 11:46:58.478246   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:46:58.478733   57519 main.go:141] libmachine: (no-preload-814599) DBG | unable to find current IP address of domain no-preload-814599 in network mk-no-preload-814599
	I0520 11:46:58.478765   57519 main.go:141] libmachine: (no-preload-814599) DBG | I0520 11:46:58.478670   59753 retry.go:31] will retry after 3.220647355s: waiting for machine to come up
	I0520 11:46:56.966878   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.466455   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:57.966485   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.466861   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:58.967023   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.467329   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:46:59.966889   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.466799   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.966521   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:01.466922   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:00.806818   58840 addons.go:505] duration metric: took 1.344560138s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0520 11:47:01.675861   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:46:59.863455   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.863859   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:01.702053   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702503   57519 main.go:141] libmachine: (no-preload-814599) Found IP for machine: 192.168.39.185
	I0520 11:47:01.702519   57519 main.go:141] libmachine: (no-preload-814599) Reserving static IP address...
	I0520 11:47:01.702532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has current primary IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.702964   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.702994   57519 main.go:141] libmachine: (no-preload-814599) Reserved static IP address: 192.168.39.185
	I0520 11:47:01.703017   57519 main.go:141] libmachine: (no-preload-814599) DBG | skip adding static IP to network mk-no-preload-814599 - found existing host DHCP lease matching {name: "no-preload-814599", mac: "52:54:00:16:c8:fb", ip: "192.168.39.185"}
	I0520 11:47:01.703036   57519 main.go:141] libmachine: (no-preload-814599) DBG | Getting to WaitForSSH function...
	I0520 11:47:01.703048   57519 main.go:141] libmachine: (no-preload-814599) Waiting for SSH to be available...
	I0520 11:47:01.705097   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705376   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.705409   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.705482   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH client type: external
	I0520 11:47:01.705511   57519 main.go:141] libmachine: (no-preload-814599) DBG | Using SSH private key: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa (-rw-------)
	I0520 11:47:01.705543   57519 main.go:141] libmachine: (no-preload-814599) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.185 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0520 11:47:01.705558   57519 main.go:141] libmachine: (no-preload-814599) DBG | About to run SSH command:
	I0520 11:47:01.705593   57519 main.go:141] libmachine: (no-preload-814599) DBG | exit 0
	I0520 11:47:01.833182   57519 main.go:141] libmachine: (no-preload-814599) DBG | SSH cmd err, output: <nil>: 
	I0520 11:47:01.833551   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetConfigRaw
	I0520 11:47:01.834149   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:01.836532   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837008   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.837047   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.837310   57519 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/config.json ...
	I0520 11:47:01.837509   57519 machine.go:94] provisionDockerMachine start ...
	I0520 11:47:01.837527   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:01.837717   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.839814   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840151   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.840171   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.840335   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.840485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840631   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.840781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.840958   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.841138   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.841149   57519 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:47:01.953190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0520 11:47:01.953221   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953480   57519 buildroot.go:166] provisioning hostname "no-preload-814599"
	I0520 11:47:01.953504   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:01.953667   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:01.956155   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956487   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:01.956518   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:01.956684   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:01.956884   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957071   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:01.957239   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:01.957416   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:01.957580   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:01.957593   57519 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-814599 && echo "no-preload-814599" | sudo tee /etc/hostname
	I0520 11:47:02.079993   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-814599
	
	I0520 11:47:02.080018   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.082623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.082950   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.082984   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.083154   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.083353   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083520   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.083677   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.083852   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.084014   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.084029   57519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-814599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-814599/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-814599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:47:02.204190   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:47:02.204225   57519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18925-3819/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-3819/.minikube}
	I0520 11:47:02.204252   57519 buildroot.go:174] setting up certificates
	I0520 11:47:02.204264   57519 provision.go:84] configureAuth start
	I0520 11:47:02.204280   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetMachineName
	I0520 11:47:02.204562   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:02.207132   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207477   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.207501   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.207633   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.209905   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210212   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.210231   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.210414   57519 provision.go:143] copyHostCerts
	I0520 11:47:02.210472   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem, removing ...
	I0520 11:47:02.210493   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem
	I0520 11:47:02.210560   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/key.pem (1679 bytes)
	I0520 11:47:02.210743   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem, removing ...
	I0520 11:47:02.210758   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem
	I0520 11:47:02.210795   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/ca.pem (1078 bytes)
	I0520 11:47:02.210875   57519 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem, removing ...
	I0520 11:47:02.210883   57519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem
	I0520 11:47:02.210905   57519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-3819/.minikube/cert.pem (1123 bytes)
	I0520 11:47:02.210962   57519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem org=jenkins.no-preload-814599 san=[127.0.0.1 192.168.39.185 localhost minikube no-preload-814599]
	I0520 11:47:02.411383   57519 provision.go:177] copyRemoteCerts
	I0520 11:47:02.411436   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:47:02.411458   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.414071   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414443   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.414473   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.414675   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.414859   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.415006   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.415125   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.507148   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:47:02.536944   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:47:02.562291   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:47:02.589034   57519 provision.go:87] duration metric: took 384.752486ms to configureAuth
	I0520 11:47:02.589092   57519 buildroot.go:189] setting minikube options for container-runtime
	I0520 11:47:02.589320   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:47:02.589412   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.592287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592704   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.592731   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.592957   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.593180   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593368   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.593561   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.593724   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:02.593910   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:02.593931   57519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0520 11:47:02.899943   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0520 11:47:02.900037   57519 machine.go:97] duration metric: took 1.062511085s to provisionDockerMachine
	I0520 11:47:02.900074   57519 start.go:293] postStartSetup for "no-preload-814599" (driver="kvm2")
	I0520 11:47:02.900112   57519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:47:02.900155   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:02.900546   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:47:02.900581   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:02.903643   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904057   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:02.904089   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:02.904253   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:02.904433   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:02.904580   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:02.904710   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:02.999752   57519 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:47:03.005832   57519 info.go:137] Remote host: Buildroot 2023.02.9
	I0520 11:47:03.005856   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/addons for local assets ...
	I0520 11:47:03.005940   57519 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-3819/.minikube/files for local assets ...
	I0520 11:47:03.006029   57519 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem -> 111622.pem in /etc/ssl/certs
	I0520 11:47:03.006145   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:47:03.016853   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:03.049845   57519 start.go:296] duration metric: took 149.725ms for postStartSetup
	I0520 11:47:03.049886   57519 fix.go:56] duration metric: took 19.183954619s for fixHost
	I0520 11:47:03.049912   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.052841   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053175   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.053203   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.053342   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.053537   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053730   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.053899   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.054048   57519 main.go:141] libmachine: Using SSH client type: native
	I0520 11:47:03.054227   57519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0520 11:47:03.054241   57519 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0520 11:47:03.166461   57519 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716205623.121199957
	
	I0520 11:47:03.166490   57519 fix.go:216] guest clock: 1716205623.121199957
	I0520 11:47:03.166500   57519 fix.go:229] Guest: 2024-05-20 11:47:03.121199957 +0000 UTC Remote: 2024-05-20 11:47:03.049891315 +0000 UTC m=+366.742654246 (delta=71.308642ms)
	I0520 11:47:03.166523   57519 fix.go:200] guest clock delta is within tolerance: 71.308642ms
	I0520 11:47:03.166530   57519 start.go:83] releasing machines lock for "no-preload-814599", held for 19.30063132s
	I0520 11:47:03.166554   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.166858   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:03.169486   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169816   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.169861   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.169990   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170546   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170728   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:47:03.170814   57519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:47:03.170875   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.170920   57519 ssh_runner.go:195] Run: cat /version.json
	I0520 11:47:03.170944   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:47:03.173717   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.173944   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174128   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174153   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174338   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:03.174365   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:03.174371   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174570   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:47:03.174571   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174781   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:47:03.174793   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.174956   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:47:03.175049   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:47:03.175169   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	W0520 11:47:03.276456   57519 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.33.0 -> Actual minikube version: v1.33.1
	I0520 11:47:03.276537   57519 ssh_runner.go:195] Run: systemctl --version
	I0520 11:47:03.282946   57519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0520 11:47:03.428948   57519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0520 11:47:03.436462   57519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0520 11:47:03.436529   57519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:47:03.453022   57519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:47:03.453045   57519 start.go:494] detecting cgroup driver to use...
	I0520 11:47:03.453103   57519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0520 11:47:03.469505   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0520 11:47:03.483617   57519 docker.go:217] disabling cri-docker service (if available) ...
	I0520 11:47:03.483673   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0520 11:47:03.498514   57519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0520 11:47:03.513169   57519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0520 11:47:03.638355   57519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0520 11:47:03.819407   57519 docker.go:233] disabling docker service ...
	I0520 11:47:03.819473   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0520 11:47:03.837928   57519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0520 11:47:03.853747   57519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0520 11:47:03.976480   57519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0520 11:47:04.097685   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0520 11:47:04.112917   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:47:04.132979   57519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0520 11:47:04.133044   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.145501   57519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0520 11:47:04.145594   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.157957   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.168774   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.180521   57519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:47:04.191739   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.203232   57519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.221386   57519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0520 11:47:04.231999   57519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:47:04.243401   57519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0520 11:47:04.243447   57519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0520 11:47:04.259232   57519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:47:04.270945   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:04.387908   57519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0520 11:47:04.526507   57519 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0520 11:47:04.526588   57519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0520 11:47:04.531568   57519 start.go:562] Will wait 60s for crictl version
	I0520 11:47:04.531628   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.535751   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:47:04.583384   57519 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0520 11:47:04.583470   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.617545   57519 ssh_runner.go:195] Run: crio --version
	I0520 11:47:04.652685   57519 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0520 11:47:04.654199   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetIP
	I0520 11:47:04.657274   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657624   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:47:04.657653   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:47:04.657858   57519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0520 11:47:04.662344   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:04.676310   57519 kubeadm.go:877] updating cluster {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:47:04.676437   57519 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 11:47:04.676480   57519 ssh_runner.go:195] Run: sudo crictl images --output json
	I0520 11:47:04.723427   57519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0520 11:47:04.723452   57519 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0520 11:47:04.723502   57519 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.723705   57519 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.723729   57519 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.723773   57519 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.723829   57519 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.723896   57519 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.723913   57519 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0520 11:47:04.723767   57519 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:04.725657   57519 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:04.725603   57519 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:04.725737   57519 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:04.725630   57519 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:04.725696   57519 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.849687   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896118   57519 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c" in container runtime
	I0520 11:47:04.896166   57519 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.896214   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:04.901457   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0520 11:47:04.940481   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0520 11:47:04.940581   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946097   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.1 (exists)
	I0520 11:47:04.946120   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.946168   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0520 11:47:04.958539   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:05.058882   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:05.059155   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:05.061672   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0520 11:47:05.082727   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:05.087840   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:05.606193   57519 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:01.966435   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.467352   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:02.967335   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.467049   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.966477   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.466715   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:04.966960   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.466641   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:05.966993   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:06.466684   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:03.677231   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.176977   58840 node_ready.go:53] node "default-k8s-diff-port-668445" has status "Ready":"False"
	I0520 11:47:06.677184   58840 node_ready.go:49] node "default-k8s-diff-port-668445" has status "Ready":"True"
	I0520 11:47:06.677208   58840 node_ready.go:38] duration metric: took 7.004390787s for node "default-k8s-diff-port-668445" to be "Ready" ...
	I0520 11:47:06.677217   58840 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:06.684890   58840 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690323   58840 pod_ready.go:92] pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.690345   58840 pod_ready.go:81] duration metric: took 5.420403ms for pod "coredns-7db6d8ff4d-x92p9" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.690354   58840 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695944   58840 pod_ready.go:92] pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.695968   58840 pod_ready.go:81] duration metric: took 5.606311ms for pod "etcd-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.695979   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701932   58840 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:06.701953   58840 pod_ready.go:81] duration metric: took 5.965003ms for pod "kube-apiserver-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:06.701966   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:04.363354   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:06.363448   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.125249   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1: (3.166660678s)
	I0520 11:47:08.125272   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.1: (3.179080546s)
	I0520 11:47:08.125294   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0520 11:47:08.125314   57519 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd" in container runtime
	I0520 11:47:08.125343   57519 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.125346   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1: (3.066428315s)
	I0520 11:47:08.125383   57519 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a" in container runtime
	I0520 11:47:08.125398   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125415   57519 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.125415   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (3.066238572s)
	I0520 11:47:08.125472   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125517   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9: (3.063820229s)
	I0520 11:47:08.125478   57519 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0520 11:47:08.125582   57519 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.125596   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1: (3.04284016s)
	I0520 11:47:08.125612   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125637   57519 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0520 11:47:08.125659   57519 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.125668   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1: (3.037804525s)
	I0520 11:47:08.125696   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125707   57519 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035" in container runtime
	I0520 11:47:08.125709   57519 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.519483497s)
	I0520 11:47:08.125727   57519 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.125758   57519 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0520 11:47:08.125775   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.125808   57519 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.125843   57519 ssh_runner.go:195] Run: which crictl
	I0520 11:47:08.140377   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0520 11:47:08.140435   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.1
	I0520 11:47:08.140472   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0520 11:47:08.140518   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0520 11:47:08.140550   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0520 11:47:08.140604   57519 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:47:08.277671   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0520 11:47:08.277785   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.286159   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0520 11:47:08.286163   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0520 11:47:08.286261   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:08.286297   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:08.287554   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0520 11:47:08.287623   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:08.300162   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0520 11:47:08.300183   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.1 (exists)
	I0520 11:47:08.300197   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300203   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0520 11:47:08.300221   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.1 (exists)
	I0520 11:47:08.300235   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0520 11:47:08.300168   57519 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0520 11:47:08.300245   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0520 11:47:08.300254   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:08.300315   57519 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:08.304669   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.1 (exists)
	I0520 11:47:10.580290   57519 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.279947927s)
	I0520 11:47:10.580392   57519 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0520 11:47:10.580421   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.1: (2.280153203s)
	I0520 11:47:10.580444   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0520 11:47:10.580466   57519 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:10.580514   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0520 11:47:06.966995   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.467261   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:07.967171   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.467104   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:08.967112   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.466446   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:09.967351   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:10.467225   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:10.467292   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:10.507451   58316 cri.go:89] found id: ""
	I0520 11:47:10.507477   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.507487   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:10.507494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:10.507557   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:10.548291   58316 cri.go:89] found id: ""
	I0520 11:47:10.548317   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.548329   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:10.548337   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:10.548401   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:10.592870   58316 cri.go:89] found id: ""
	I0520 11:47:10.592899   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.592909   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:10.592916   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:10.592997   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:10.637344   58316 cri.go:89] found id: ""
	I0520 11:47:10.637376   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.637387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:10.637395   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:10.637466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:10.679759   58316 cri.go:89] found id: ""
	I0520 11:47:10.679794   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.679802   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:10.679807   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:10.679868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:10.720179   58316 cri.go:89] found id: ""
	I0520 11:47:10.720215   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.720250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:10.720259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:10.720324   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:10.761279   58316 cri.go:89] found id: ""
	I0520 11:47:10.761308   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.761320   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:10.761327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:10.761383   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:10.799008   58316 cri.go:89] found id: ""
	I0520 11:47:10.799037   58316 logs.go:276] 0 containers: []
	W0520 11:47:10.799048   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:10.799059   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:10.799072   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:10.850507   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:10.850537   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:10.867287   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:10.867317   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:11.007373   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:11.007401   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:11.007420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:11.090037   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:11.090074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:07.717364   58840 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:07.717391   58840 pod_ready.go:81] duration metric: took 1.01541621s for pod "kube-controller-manager-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:07.717404   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041632   58840 pod_ready.go:92] pod "kube-proxy-7ddtk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:08.041657   58840 pod_ready.go:81] duration metric: took 324.24633ms for pod "kube-proxy-7ddtk" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:08.041669   58840 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:10.047877   58840 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:08.364366   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:10.366662   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:12.895005   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:11.650206   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.069627914s)
	I0520 11:47:11.650241   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0520 11:47:11.650265   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:11.650322   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0520 11:47:13.011174   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.1: (1.360821671s)
	I0520 11:47:13.011203   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0520 11:47:13.011225   57519 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.011270   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0520 11:47:13.633150   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:13.650425   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:13.650494   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:13.690842   58316 cri.go:89] found id: ""
	I0520 11:47:13.690869   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.690880   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:13.690887   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:13.690947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:13.727462   58316 cri.go:89] found id: ""
	I0520 11:47:13.727492   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.727504   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:13.727511   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:13.727570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:13.765767   58316 cri.go:89] found id: ""
	I0520 11:47:13.765797   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.765807   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:13.765816   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:13.765877   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:13.800210   58316 cri.go:89] found id: ""
	I0520 11:47:13.800239   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.800248   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:13.800255   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:13.800316   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:13.836481   58316 cri.go:89] found id: ""
	I0520 11:47:13.836513   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.836525   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:13.836533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:13.836601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:13.881888   58316 cri.go:89] found id: ""
	I0520 11:47:13.881914   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.881923   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:13.881932   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:13.881990   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:13.916896   58316 cri.go:89] found id: ""
	I0520 11:47:13.916955   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.916966   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:13.916973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:13.917037   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:13.950345   58316 cri.go:89] found id: ""
	I0520 11:47:13.950372   58316 logs.go:276] 0 containers: []
	W0520 11:47:13.950381   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:13.950391   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:13.950415   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:14.039076   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:14.039114   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:14.086767   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:14.086794   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:14.140377   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:14.140418   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:14.158036   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:14.158074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:14.255195   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:12.048869   58840 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:12.048894   58840 pod_ready.go:81] duration metric: took 4.007218109s for pod "kube-scheduler-default-k8s-diff-port-668445" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:12.048903   58840 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:14.057711   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.555471   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:15.365437   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:17.864290   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:16.885885   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.874589428s)
	I0520 11:47:16.885917   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0520 11:47:16.885952   57519 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:16.886034   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1
	I0520 11:47:18.674100   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.1: (1.788037677s)
	I0520 11:47:18.674128   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0520 11:47:18.674149   57519 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:18.674197   57519 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0520 11:47:20.543671   57519 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.86944462s)
	I0520 11:47:20.543705   57519 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18925-3819/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0520 11:47:20.543733   57519 cache_images.go:123] Successfully loaded all cached images
	I0520 11:47:20.543739   57519 cache_images.go:92] duration metric: took 15.820273224s to LoadCachedImages
	I0520 11:47:20.543749   57519 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.1 crio true true} ...
	I0520 11:47:20.543863   57519 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-814599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:47:20.543944   57519 ssh_runner.go:195] Run: crio config
	I0520 11:47:20.593647   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:20.593672   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:20.593692   57519 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:47:20.593719   57519 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-814599 NodeName:no-preload-814599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 11:47:20.593863   57519 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-814599"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:47:20.593936   57519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 11:47:20.604739   57519 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:47:20.604791   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:47:20.614029   57519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0520 11:47:20.630587   57519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:47:20.647296   57519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0520 11:47:20.664453   57519 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0520 11:47:20.668296   57519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.185	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:47:20.681086   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:47:20.816542   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:47:20.834344   57519 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599 for IP: 192.168.39.185
	I0520 11:47:20.834369   57519 certs.go:194] generating shared ca certs ...
	I0520 11:47:20.834388   57519 certs.go:226] acquiring lock for ca certs: {Name:mk10f8c4e53cbda78dab56f1515712938835df03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:47:20.834578   57519 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key
	I0520 11:47:20.834634   57519 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key
	I0520 11:47:20.834649   57519 certs.go:256] generating profile certs ...
	I0520 11:47:20.834777   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.key
	I0520 11:47:20.834876   57519 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key.d1d23eb5
	I0520 11:47:20.834928   57519 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key
	I0520 11:47:20.835090   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem (1338 bytes)
	W0520 11:47:20.835146   57519 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162_empty.pem, impossibly tiny 0 bytes
	I0520 11:47:20.835159   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca-key.pem (1675 bytes)
	I0520 11:47:20.835192   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:47:20.835226   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:47:20.835255   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/certs/key.pem (1679 bytes)
	I0520 11:47:20.835314   57519 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem (1708 bytes)
	I0520 11:47:20.836325   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:47:20.883227   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:47:20.918566   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:47:20.959097   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0520 11:47:21.006422   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0520 11:47:21.035717   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 11:47:21.071087   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:47:21.094330   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:47:21.119597   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/ssl/certs/111622.pem --> /usr/share/ca-certificates/111622.pem (1708 bytes)
	I0520 11:47:21.145488   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:47:21.173152   57519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-3819/.minikube/certs/11162.pem --> /usr/share/ca-certificates/11162.pem (1338 bytes)
	I0520 11:47:21.197627   57519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:47:21.214595   57519 ssh_runner.go:195] Run: openssl version
	I0520 11:47:21.220678   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11162.pem && ln -fs /usr/share/ca-certificates/11162.pem /etc/ssl/certs/11162.pem"
	I0520 11:47:21.232251   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.236953   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:34 /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.237015   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11162.pem
	I0520 11:47:21.243048   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11162.pem /etc/ssl/certs/51391683.0"
	I0520 11:47:21.253667   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111622.pem && ln -fs /usr/share/ca-certificates/111622.pem /etc/ssl/certs/111622.pem"
	I0520 11:47:21.264223   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268607   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:34 /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.268648   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111622.pem
	I0520 11:47:21.274112   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111622.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:47:21.284427   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:47:21.294811   57519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299403   57519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.299455   57519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:47:21.304954   57519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:47:21.315320   57519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:47:21.319956   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:47:21.325692   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:47:21.331232   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:47:21.337240   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:47:16.755906   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:16.772659   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:16.772736   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:16.821927   58316 cri.go:89] found id: ""
	I0520 11:47:16.821954   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.821964   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:16.821971   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:16.822033   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:16.863636   58316 cri.go:89] found id: ""
	I0520 11:47:16.863663   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.863673   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:16.863681   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:16.863749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:16.900524   58316 cri.go:89] found id: ""
	I0520 11:47:16.900553   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.900561   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:16.900566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:16.900625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:16.949484   58316 cri.go:89] found id: ""
	I0520 11:47:16.949517   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.949527   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:16.949535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:16.949597   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:16.994006   58316 cri.go:89] found id: ""
	I0520 11:47:16.994035   58316 logs.go:276] 0 containers: []
	W0520 11:47:16.994045   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:16.994054   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:16.994111   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:17.030100   58316 cri.go:89] found id: ""
	I0520 11:47:17.030130   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.030141   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:17.030147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:17.030195   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:17.069067   58316 cri.go:89] found id: ""
	I0520 11:47:17.069089   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.069097   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:17.069102   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:17.069146   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:17.105102   58316 cri.go:89] found id: ""
	I0520 11:47:17.105131   58316 logs.go:276] 0 containers: []
	W0520 11:47:17.105141   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:17.105151   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:17.105165   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:17.159912   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:17.159937   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:17.212192   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:17.212227   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:17.227761   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:17.227802   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:17.305724   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:17.305753   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:17.305770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:19.886189   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:19.900397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:19.900475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:19.940473   58316 cri.go:89] found id: ""
	I0520 11:47:19.940502   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.940509   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:19.940533   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:19.940582   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:19.985561   58316 cri.go:89] found id: ""
	I0520 11:47:19.985594   58316 logs.go:276] 0 containers: []
	W0520 11:47:19.985605   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:19.985611   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:19.985676   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:20.040865   58316 cri.go:89] found id: ""
	I0520 11:47:20.040895   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.040906   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:20.040913   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:20.040991   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:20.086122   58316 cri.go:89] found id: ""
	I0520 11:47:20.086149   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.086159   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:20.086167   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:20.086228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:20.122905   58316 cri.go:89] found id: ""
	I0520 11:47:20.122931   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.122938   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:20.122943   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:20.123055   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:20.165966   58316 cri.go:89] found id: ""
	I0520 11:47:20.165993   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.166004   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:20.166012   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:20.166064   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:20.206010   58316 cri.go:89] found id: ""
	I0520 11:47:20.206043   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.206054   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:20.206062   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:20.206126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:20.243540   58316 cri.go:89] found id: ""
	I0520 11:47:20.243565   58316 logs.go:276] 0 containers: []
	W0520 11:47:20.243573   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:20.243582   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:20.243595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:20.327278   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:20.327298   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:20.327313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:20.423064   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:20.423096   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:20.467707   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:20.467742   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:20.522746   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:20.522782   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:18.556589   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.557476   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:20.363536   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:22.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:21.342881   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:47:21.349172   57519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:47:21.354721   57519 kubeadm.go:391] StartCluster: {Name:no-preload-814599 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-814599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:47:21.354839   57519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0520 11:47:21.354923   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.399473   57519 cri.go:89] found id: ""
	I0520 11:47:21.399554   57519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:47:21.413632   57519 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:47:21.413656   57519 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:47:21.413662   57519 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:47:21.413712   57519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:47:21.425556   57519 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:47:21.426515   57519 kubeconfig.go:125] found "no-preload-814599" server: "https://192.168.39.185:8443"
	I0520 11:47:21.428556   57519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:47:21.438628   57519 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.185
	I0520 11:47:21.438655   57519 kubeadm.go:1154] stopping kube-system containers ...
	I0520 11:47:21.438668   57519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0520 11:47:21.438708   57519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0520 11:47:21.485080   57519 cri.go:89] found id: ""
	I0520 11:47:21.485149   57519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0520 11:47:21.502591   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:47:21.512148   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:47:21.512168   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:47:21.512208   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:47:21.521378   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:47:21.521434   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:47:21.531568   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:47:21.541404   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:47:21.541459   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:47:21.552018   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.562667   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:47:21.562711   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:47:21.572240   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:47:21.581472   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:47:21.581535   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
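[editor's note] The lines above record the stale-kubeconfig check: each config under /etc/kubernetes is grepped for the expected control-plane URL and removed when the check fails. The following is only an illustrative Go sketch of that loop, not minikube's own code; the paths and URL are taken from the log, and running the commands locally rather than over SSH is an assumption made to keep the example self-contained.

// Sketch only: mirrors the grep-then-remove sequence logged by kubeadm.go above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the pattern (or the file itself) is missing,
		// which is exactly the "Process exited with status 2" seen in the log.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%s does not reference %s - removing\n", conf, endpoint)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}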
	I0520 11:47:21.591072   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:47:21.601114   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:21.716435   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.597273   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.792224   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.866757   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:22.999294   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:47:22.999389   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.499642   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.000424   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:24.022646   57519 api_server.go:72] duration metric: took 1.023349447s to wait for apiserver process to appear ...
	I0520 11:47:24.022673   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:47:24.022691   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:24.023162   57519 api_server.go:269] stopped: https://192.168.39.185:8443/healthz: Get "https://192.168.39.185:8443/healthz": dial tcp 192.168.39.185:8443: connect: connection refused
	I0520 11:47:24.523760   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:23.037165   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:23.051438   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:23.051490   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:23.088156   58316 cri.go:89] found id: ""
	I0520 11:47:23.088186   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.088197   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:23.088204   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:23.088266   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:23.123357   58316 cri.go:89] found id: ""
	I0520 11:47:23.123384   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.123394   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:23.123401   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:23.123461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:23.159579   58316 cri.go:89] found id: ""
	I0520 11:47:23.159612   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.159622   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:23.159630   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:23.159694   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:23.195471   58316 cri.go:89] found id: ""
	I0520 11:47:23.195507   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.195518   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:23.195525   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:23.195584   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:23.231121   58316 cri.go:89] found id: ""
	I0520 11:47:23.231148   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.231159   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:23.231165   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:23.231225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:23.267809   58316 cri.go:89] found id: ""
	I0520 11:47:23.267833   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.267841   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:23.267847   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:23.267897   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:23.306589   58316 cri.go:89] found id: ""
	I0520 11:47:23.306608   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.306615   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:23.306621   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:23.306678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:23.339285   58316 cri.go:89] found id: ""
	I0520 11:47:23.339308   58316 logs.go:276] 0 containers: []
	W0520 11:47:23.339317   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:23.339329   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:23.339343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:23.406157   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:23.406196   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:23.420874   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:23.420911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:23.503049   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:23.503077   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:23.503092   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:23.585189   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:23.585222   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:26.130594   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:26.146278   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:26.146366   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:26.189721   58316 cri.go:89] found id: ""
	I0520 11:47:26.189747   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.189754   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:26.189760   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:26.189809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:26.228060   58316 cri.go:89] found id: ""
	I0520 11:47:26.228084   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.228098   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:26.228104   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:26.228171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:26.262360   58316 cri.go:89] found id: ""
	I0520 11:47:26.262390   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.262400   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:26.262407   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:26.262471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:26.297981   58316 cri.go:89] found id: ""
	I0520 11:47:26.298010   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.298022   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:26.298029   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:26.298094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:26.339873   58316 cri.go:89] found id: ""
	I0520 11:47:26.339901   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.339911   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:26.339918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:26.339983   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:26.376366   58316 cri.go:89] found id: ""
	I0520 11:47:26.376393   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.376404   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:26.376411   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:26.376471   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:26.410803   58316 cri.go:89] found id: ""
	I0520 11:47:26.410839   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.410851   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:26.410859   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:26.410921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:26.445392   58316 cri.go:89] found id: ""
	I0520 11:47:26.445420   58316 logs.go:276] 0 containers: []
	W0520 11:47:26.445432   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:26.445443   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:26.445458   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:26.499244   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:26.499276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:26.513224   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:26.513259   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:26.589623   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:26.589647   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:26.589661   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:26.676875   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:26.676918   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
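[editor's note] The repeated "listing CRI containers" / "No container was found matching ..." cycle above scans each control-plane component with `crictl ps -a --quiet --name=<component>`; an empty result means the component has no container yet. The following is an illustrative Go sketch of that scan, not minikube's own code; the component names come from the log, and invoking crictl directly instead of over SSH is an assumption for brevity.

// Sketch only: per-component container scan, as logged by cri.go/logs.go above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}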
	I0520 11:47:23.055887   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:25.555044   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:24.863476   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.363148   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:27.158548   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.158572   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.158585   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.205830   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0520 11:47:27.205867   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0520 11:47:27.523283   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:27.527618   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:27.527643   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.023238   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.029601   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.029630   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:28.522849   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:28.528350   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:28.528372   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.023422   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.027460   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.027493   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:29.523739   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:29.528289   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:29.528317   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.022859   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.027324   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0520 11:47:30.027348   57519 api_server.go:103] status: https://192.168.39.185:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0520 11:47:30.522929   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:47:30.527596   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:47:30.534571   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:47:30.534592   57519 api_server.go:131] duration metric: took 6.511912744s to wait for apiserver health ...
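[editor's note] The healthz probing above retries https://192.168.39.185:8443/healthz roughly every 500ms, tolerating the 403 and 500 responses while the apiserver's post-start hooks finish, until it returns 200. The following is an illustrative Go sketch of such a polling loop, not minikube's api_server.go; the URL and overall cadence are taken from the log, while the 4-minute timeout and skipping TLS verification are assumptions made to keep the probe self-contained.

// Sketch only: poll /healthz until the apiserver reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.185:8443/healthz"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403 (RBAC not bootstrapped) and 500 (post-start hooks pending) are expected early on.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}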
	I0520 11:47:30.534601   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:47:30.534609   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:47:30.536647   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:47:30.537896   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:47:30.548779   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0520 11:47:30.568124   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:47:30.577489   57519 system_pods.go:59] 8 kube-system pods found
	I0520 11:47:30.577522   57519 system_pods.go:61] "coredns-7db6d8ff4d-tg5bl" [ec723a9c-933c-4d25-9b56-e39a78adbbb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0520 11:47:30.577530   57519 system_pods.go:61] "etcd-no-preload-814599" [d0e5ca15-cc81-4dc2-9bc4-0e9b88c7b49d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0520 11:47:30.577538   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [81f50182-5190-4168-b97a-f5a7a6348247] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0520 11:47:30.577544   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [871504b1-6e24-498b-9563-241b7a7422d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0520 11:47:30.577550   57519 system_pods.go:61] "kube-proxy-mqg49" [0f3b3b30-9789-4a0b-b636-e6db16e1ce0d] Running
	I0520 11:47:30.577560   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [028942b5-5187-4b86-84ba-eba2cdb7a844] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0520 11:47:30.577567   57519 system_pods.go:61] "metrics-server-569cc877fc-v2zh9" [1be844eb-66ba-4a82-86fb-6f53de7e4d33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:47:30.577577   57519 system_pods.go:61] "storage-provisioner" [84c706cd-0245-4afa-bb7b-4644bd2784bd] Running
	I0520 11:47:30.577586   57519 system_pods.go:74] duration metric: took 9.439264ms to wait for pod list to return data ...
	I0520 11:47:30.577598   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:47:30.581346   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:47:30.581367   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:47:30.581377   57519 node_conditions.go:105] duration metric: took 3.772205ms to run NodePressure ...
	I0520 11:47:30.581398   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0520 11:47:30.849703   57519 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853895   57519 kubeadm.go:733] kubelet initialised
	I0520 11:47:30.853917   57519 kubeadm.go:734] duration metric: took 4.191303ms waiting for restarted kubelet to initialise ...
	I0520 11:47:30.853925   57519 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:47:30.861807   57519 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.866768   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866797   57519 pod_ready.go:81] duration metric: took 4.964994ms for pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.866809   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "coredns-7db6d8ff4d-tg5bl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.866819   57519 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.871001   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871027   57519 pod_ready.go:81] duration metric: took 4.200278ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.871038   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "etcd-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.871046   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.875615   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875635   57519 pod_ready.go:81] duration metric: took 4.580378ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.875645   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-apiserver-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.875655   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:30.971186   57519 pod_ready.go:97] node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971210   57519 pod_ready.go:81] duration metric: took 95.545422ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	E0520 11:47:30.971219   57519 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-814599" hosting pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-814599" has status "Ready":"False"
	I0520 11:47:30.971226   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:29.250626   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:29.265658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:29.265716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:29.307618   58316 cri.go:89] found id: ""
	I0520 11:47:29.307640   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.307651   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:29.307658   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:29.307717   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:29.351325   58316 cri.go:89] found id: ""
	I0520 11:47:29.351353   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.351364   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:29.351372   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:29.351444   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:29.392415   58316 cri.go:89] found id: ""
	I0520 11:47:29.392439   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.392449   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:29.392455   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:29.392501   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:29.435195   58316 cri.go:89] found id: ""
	I0520 11:47:29.435220   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.435227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:29.435233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:29.435287   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:29.476746   58316 cri.go:89] found id: ""
	I0520 11:47:29.476773   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.476784   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:29.476790   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:29.476875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:29.511513   58316 cri.go:89] found id: ""
	I0520 11:47:29.511545   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.511553   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:29.511559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:29.511621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:29.547563   58316 cri.go:89] found id: ""
	I0520 11:47:29.547583   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.547590   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:29.547598   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:29.547644   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:29.586438   58316 cri.go:89] found id: ""
	I0520 11:47:29.586464   58316 logs.go:276] 0 containers: []
	W0520 11:47:29.586472   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:29.586480   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:29.586491   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:29.660795   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:29.660828   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:29.699538   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:29.699568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:29.751675   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:29.751711   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:29.766924   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:29.766960   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:29.840738   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:27.555356   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.556470   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:29.862807   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:32.364158   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:31.372203   57519 pod_ready.go:92] pod "kube-proxy-mqg49" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:31.372223   57519 pod_ready.go:81] duration metric: took 400.985089ms for pod "kube-proxy-mqg49" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:31.372231   57519 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:33.379905   57519 pod_ready.go:102] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:35.378764   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:47:35.378793   57519 pod_ready.go:81] duration metric: took 4.006551156s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:47:35.378804   57519 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
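[editor's note] The pod_ready.go lines above poll individual kube-system pods until their Ready condition is True (or the 4m0s budget runs out). The following is an illustrative client-go sketch of that kind of wait, not minikube's pod_ready.go; the namespace, pod name, and kubeconfig path are taken from the log, and running the check outside the node with that kubeconfig is an assumption.

// Sketch only: wait for a pod's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-v2zh9", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}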
	I0520 11:47:32.341347   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:32.355727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:32.355784   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:32.399703   58316 cri.go:89] found id: ""
	I0520 11:47:32.399740   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.399749   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:32.399754   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:32.399813   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:32.440321   58316 cri.go:89] found id: ""
	I0520 11:47:32.440350   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.440362   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:32.440367   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:32.440427   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:32.477813   58316 cri.go:89] found id: ""
	I0520 11:47:32.477840   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.477848   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:32.477853   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:32.477899   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:32.524465   58316 cri.go:89] found id: ""
	I0520 11:47:32.524492   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.524507   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:32.524516   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:32.524574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:32.559287   58316 cri.go:89] found id: ""
	I0520 11:47:32.559316   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.559326   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:32.559334   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:32.559390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:32.595977   58316 cri.go:89] found id: ""
	I0520 11:47:32.596010   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.596022   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:32.596030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:32.596088   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:32.631700   58316 cri.go:89] found id: ""
	I0520 11:47:32.631730   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.631739   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:32.631746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:32.631818   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:32.668027   58316 cri.go:89] found id: ""
	I0520 11:47:32.668062   58316 logs.go:276] 0 containers: []
	W0520 11:47:32.668074   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:32.668084   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:32.668097   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:32.719910   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:32.719940   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:32.733988   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:32.734011   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:32.805852   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:32.805871   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:32.805886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:32.884785   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:32.884814   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:35.425234   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:35.438512   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:35.438583   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:35.471964   58316 cri.go:89] found id: ""
	I0520 11:47:35.471989   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.471997   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:35.472003   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:35.472058   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:35.505663   58316 cri.go:89] found id: ""
	I0520 11:47:35.505695   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.505706   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:35.505714   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:35.505766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:35.540407   58316 cri.go:89] found id: ""
	I0520 11:47:35.540437   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.540447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:35.540454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:35.540516   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:35.576599   58316 cri.go:89] found id: ""
	I0520 11:47:35.576622   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.576629   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:35.576634   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:35.576689   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:35.610257   58316 cri.go:89] found id: ""
	I0520 11:47:35.610286   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.610297   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:35.610304   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:35.610367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:35.647305   58316 cri.go:89] found id: ""
	I0520 11:47:35.647330   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.647338   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:35.647344   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:35.647393   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:35.682149   58316 cri.go:89] found id: ""
	I0520 11:47:35.682180   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.682191   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:35.682198   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:35.682260   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:35.717536   58316 cri.go:89] found id: ""
	I0520 11:47:35.717561   58316 logs.go:276] 0 containers: []
	W0520 11:47:35.717569   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:35.717576   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:35.717588   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:35.768734   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:35.768763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:35.783246   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:35.783276   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:35.869299   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:35.869325   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:35.869343   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:35.948083   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:35.948116   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:32.056046   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.555606   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:36.556231   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:34.862916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.363328   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:37.385861   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.884432   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:38.498413   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:38.512549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:38.512627   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:38.545580   58316 cri.go:89] found id: ""
	I0520 11:47:38.545601   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.545608   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:38.545614   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:38.545660   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.583115   58316 cri.go:89] found id: ""
	I0520 11:47:38.583145   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.583156   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:38.583163   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:38.583210   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:38.619548   58316 cri.go:89] found id: ""
	I0520 11:47:38.619578   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.619588   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:38.619596   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:38.619664   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:38.664240   58316 cri.go:89] found id: ""
	I0520 11:47:38.664272   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.664282   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:38.664290   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:38.664347   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:38.703438   58316 cri.go:89] found id: ""
	I0520 11:47:38.703460   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.703468   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:38.703472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:38.703546   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:38.740712   58316 cri.go:89] found id: ""
	I0520 11:47:38.740742   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.740754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:38.740761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:38.740821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:38.785274   58316 cri.go:89] found id: ""
	I0520 11:47:38.785296   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.785305   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:38.785312   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:38.785370   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:38.827757   58316 cri.go:89] found id: ""
	I0520 11:47:38.827787   58316 logs.go:276] 0 containers: []
	W0520 11:47:38.827798   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:38.827810   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:38.827825   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:38.880074   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:38.880105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:38.894643   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:38.894688   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:38.982821   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:38.982849   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:38.982872   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:39.066426   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:39.066457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:41.610439   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:41.623306   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:41.623374   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:41.662703   58316 cri.go:89] found id: ""
	I0520 11:47:41.662731   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.662740   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:41.662747   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:41.662803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:38.557813   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.054616   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:39.863318   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:42.362965   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.893841   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.387666   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:41.699406   58316 cri.go:89] found id: ""
	I0520 11:47:41.699433   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.699443   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:41.699449   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:41.699493   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:41.733448   58316 cri.go:89] found id: ""
	I0520 11:47:41.733475   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.733485   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:41.733492   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:41.733556   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:41.782101   58316 cri.go:89] found id: ""
	I0520 11:47:41.782131   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.782142   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:41.782149   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:41.782213   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:41.828303   58316 cri.go:89] found id: ""
	I0520 11:47:41.828332   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.828343   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:41.828350   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:41.828438   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:41.865690   58316 cri.go:89] found id: ""
	I0520 11:47:41.865712   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.865721   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:41.865726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:41.865773   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:41.907954   58316 cri.go:89] found id: ""
	I0520 11:47:41.907982   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.907992   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:41.907999   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:41.908060   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:41.946853   58316 cri.go:89] found id: ""
	I0520 11:47:41.946879   58316 logs.go:276] 0 containers: []
	W0520 11:47:41.946889   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:41.946901   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:41.946916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:42.001481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:42.001516   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:42.015383   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:42.015412   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:42.088155   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:42.088184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:42.088201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:42.172771   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:42.172801   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:44.714665   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:44.727782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:44.727866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:44.760377   58316 cri.go:89] found id: ""
	I0520 11:47:44.760405   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.760412   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:44.760418   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:44.760466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:44.793488   58316 cri.go:89] found id: ""
	I0520 11:47:44.793509   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.793517   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:44.793522   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:44.793570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:44.827752   58316 cri.go:89] found id: ""
	I0520 11:47:44.827777   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.827786   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:44.827792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:44.827840   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:44.864415   58316 cri.go:89] found id: ""
	I0520 11:47:44.864437   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.864446   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:44.864454   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:44.864514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:44.904360   58316 cri.go:89] found id: ""
	I0520 11:47:44.904382   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.904391   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:44.904396   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:44.904452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:44.939969   58316 cri.go:89] found id: ""
	I0520 11:47:44.940000   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.940010   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:44.940019   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:44.940069   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:44.979521   58316 cri.go:89] found id: ""
	I0520 11:47:44.979544   58316 logs.go:276] 0 containers: []
	W0520 11:47:44.979552   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:44.979557   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:44.979621   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:45.012946   58316 cri.go:89] found id: ""
	I0520 11:47:45.012979   58316 logs.go:276] 0 containers: []
	W0520 11:47:45.012992   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:45.013002   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:45.013015   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:45.067582   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:45.067619   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:45.081497   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:45.081529   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:45.155312   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:45.155335   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:45.155351   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:45.232403   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:45.232439   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:43.055052   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:45.056688   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:44.863217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.362713   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:46.886365   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.384568   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:47.774002   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:47.789020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:47.789078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:47.826284   58316 cri.go:89] found id: ""
	I0520 11:47:47.826319   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.826331   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:47.826340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:47.826404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:47.866588   58316 cri.go:89] found id: ""
	I0520 11:47:47.866610   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.866616   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:47.866621   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:47.866675   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:47.901556   58316 cri.go:89] found id: ""
	I0520 11:47:47.901582   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.901593   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:47.901600   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:47.901658   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:47.937563   58316 cri.go:89] found id: ""
	I0520 11:47:47.937586   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.937593   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:47.937599   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:47.937656   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:47.975493   58316 cri.go:89] found id: ""
	I0520 11:47:47.975517   58316 logs.go:276] 0 containers: []
	W0520 11:47:47.975527   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:47.975535   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:47.975594   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:48.012208   58316 cri.go:89] found id: ""
	I0520 11:47:48.012232   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.012239   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:48.012245   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:48.012303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:48.046951   58316 cri.go:89] found id: ""
	I0520 11:47:48.046979   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.046989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:48.046996   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:48.047057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:48.085611   58316 cri.go:89] found id: ""
	I0520 11:47:48.085639   58316 logs.go:276] 0 containers: []
	W0520 11:47:48.085649   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:48.085659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:48.085673   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:48.159338   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:48.159359   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:48.159373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:48.238332   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:48.238362   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:48.282316   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:48.282346   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:48.333780   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:48.333811   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:50.849007   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:50.863710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:50.863766   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:50.900260   58316 cri.go:89] found id: ""
	I0520 11:47:50.900281   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.900287   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:50.900294   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:50.900342   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:50.934796   58316 cri.go:89] found id: ""
	I0520 11:47:50.934817   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.934824   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:50.934830   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:50.934875   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:50.972957   58316 cri.go:89] found id: ""
	I0520 11:47:50.972983   58316 logs.go:276] 0 containers: []
	W0520 11:47:50.972992   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:50.973000   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:50.973073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:51.009354   58316 cri.go:89] found id: ""
	I0520 11:47:51.009392   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.009402   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:51.009410   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:51.009467   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:51.046268   58316 cri.go:89] found id: ""
	I0520 11:47:51.046297   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.046308   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:51.046316   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:51.046379   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:51.083513   58316 cri.go:89] found id: ""
	I0520 11:47:51.083538   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.083548   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:51.083554   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:51.083619   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:51.126736   58316 cri.go:89] found id: ""
	I0520 11:47:51.126761   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.126778   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:51.126784   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:51.126841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:51.166149   58316 cri.go:89] found id: ""
	I0520 11:47:51.166176   58316 logs.go:276] 0 containers: []
	W0520 11:47:51.166187   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:51.166196   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:51.166207   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:51.218946   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:51.218989   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:51.233779   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:51.233813   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:51.312266   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:51.312289   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:51.312306   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:51.393884   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:51.393911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:47.056743   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.555489   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:49.363267   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.863214   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:51.385469   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.884960   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:55.885430   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:53.939605   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:53.952518   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:53.952585   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:53.995262   58316 cri.go:89] found id: ""
	I0520 11:47:53.995291   58316 logs.go:276] 0 containers: []
	W0520 11:47:53.995302   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:53.995309   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:53.995372   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:54.029357   58316 cri.go:89] found id: ""
	I0520 11:47:54.029379   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.029387   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:54.029393   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:54.029441   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:54.064902   58316 cri.go:89] found id: ""
	I0520 11:47:54.064937   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.064947   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:54.064954   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:54.065014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:54.102763   58316 cri.go:89] found id: ""
	I0520 11:47:54.102788   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.102795   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:54.102801   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:54.102866   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:54.136345   58316 cri.go:89] found id: ""
	I0520 11:47:54.136373   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.136384   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:54.136392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:54.136456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:54.173106   58316 cri.go:89] found id: ""
	I0520 11:47:54.173135   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.173144   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:54.173151   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:54.173214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:54.209199   58316 cri.go:89] found id: ""
	I0520 11:47:54.209222   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.209229   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:54.209235   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:54.209301   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:54.247681   58316 cri.go:89] found id: ""
	I0520 11:47:54.247711   58316 logs.go:276] 0 containers: []
	W0520 11:47:54.247722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:54.247731   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:54.247749   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:54.325604   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:54.325623   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:54.325635   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:54.405791   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:54.405832   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:54.445633   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:54.445671   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:47:54.495395   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:54.495424   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:52.055220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.059922   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.554742   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:54.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:56.862720   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.886732   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:00.387613   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:57.011083   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:47:57.025109   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:47:57.025171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:47:57.059268   58316 cri.go:89] found id: ""
	I0520 11:47:57.059299   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.059310   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:47:57.059318   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:47:57.059371   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:47:57.093744   58316 cri.go:89] found id: ""
	I0520 11:47:57.093770   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.093780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:47:57.093788   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:47:57.093854   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:47:57.128836   58316 cri.go:89] found id: ""
	I0520 11:47:57.128868   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.128879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:47:57.128886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:47:57.128967   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:47:57.163858   58316 cri.go:89] found id: ""
	I0520 11:47:57.163882   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.163890   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:47:57.163896   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:47:57.163951   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:47:57.198684   58316 cri.go:89] found id: ""
	I0520 11:47:57.198713   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.198724   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:47:57.198732   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:47:57.198794   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:47:57.235121   58316 cri.go:89] found id: ""
	I0520 11:47:57.235147   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.235155   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:47:57.235161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:47:57.235208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:47:57.279621   58316 cri.go:89] found id: ""
	I0520 11:47:57.279641   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.279650   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:47:57.279655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:47:57.279701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:47:57.316292   58316 cri.go:89] found id: ""
	I0520 11:47:57.316319   58316 logs.go:276] 0 containers: []
	W0520 11:47:57.316329   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:47:57.316338   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:47:57.316350   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:47:57.330013   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:47:57.330037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:47:57.405736   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:57.405762   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:47:57.405776   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:47:57.483126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:47:57.483160   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:47:57.527345   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:47:57.527375   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.081615   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:00.094782   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:00.094841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:00.133435   58316 cri.go:89] found id: ""
	I0520 11:48:00.133470   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.133486   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:00.133492   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:00.133553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:00.169402   58316 cri.go:89] found id: ""
	I0520 11:48:00.169434   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.169445   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:00.169453   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:00.169515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:00.204702   58316 cri.go:89] found id: ""
	I0520 11:48:00.204728   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.204738   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:00.204745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:00.204803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:00.246341   58316 cri.go:89] found id: ""
	I0520 11:48:00.246373   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.246383   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:00.246391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:00.246452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:00.279937   58316 cri.go:89] found id: ""
	I0520 11:48:00.279961   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.279968   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:00.279973   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:00.280021   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:00.314825   58316 cri.go:89] found id: ""
	I0520 11:48:00.314853   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.314864   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:00.314870   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:00.314937   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:00.354431   58316 cri.go:89] found id: ""
	I0520 11:48:00.354460   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.354471   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:00.354478   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:00.354538   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:00.391309   58316 cri.go:89] found id: ""
	I0520 11:48:00.391335   58316 logs.go:276] 0 containers: []
	W0520 11:48:00.391342   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:00.391350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:00.391361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:00.471502   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:00.471535   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:00.508324   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:00.508352   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:00.559695   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:00.559728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:00.573677   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:00.573709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:00.648102   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:47:58.558508   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.055219   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:47:58.862939   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:01.362135   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:02.886809   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:04.887274   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.148336   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:03.162792   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:03.162922   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:03.202620   58316 cri.go:89] found id: ""
	I0520 11:48:03.202656   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.202667   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:03.202674   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:03.202732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:03.243434   58316 cri.go:89] found id: ""
	I0520 11:48:03.243460   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.243468   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:03.243474   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:03.243528   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:03.278517   58316 cri.go:89] found id: ""
	I0520 11:48:03.278545   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.278553   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:03.278560   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:03.278624   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:03.317253   58316 cri.go:89] found id: ""
	I0520 11:48:03.317281   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.317291   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:03.317298   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:03.317360   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:03.352394   58316 cri.go:89] found id: ""
	I0520 11:48:03.352416   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.352425   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:03.352431   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:03.352480   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:03.393016   58316 cri.go:89] found id: ""
	I0520 11:48:03.393044   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.393055   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:03.393063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:03.393134   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:03.434436   58316 cri.go:89] found id: ""
	I0520 11:48:03.434461   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.434468   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:03.434474   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:03.434531   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:03.476703   58316 cri.go:89] found id: ""
	I0520 11:48:03.476728   58316 logs.go:276] 0 containers: []
	W0520 11:48:03.476737   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:03.476746   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:03.476761   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:03.529959   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:03.529996   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:03.544741   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:03.544770   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:03.614526   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:03.614552   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:03.614568   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:03.695336   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:03.695368   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:06.233916   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:06.249026   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:06.249083   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:06.288191   58316 cri.go:89] found id: ""
	I0520 11:48:06.288223   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.288233   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:06.288246   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:06.288307   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:06.328602   58316 cri.go:89] found id: ""
	I0520 11:48:06.328639   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.328647   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:06.328653   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:06.328710   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:06.373030   58316 cri.go:89] found id: ""
	I0520 11:48:06.373053   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.373062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:06.373069   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:06.373130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:06.410520   58316 cri.go:89] found id: ""
	I0520 11:48:06.410542   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.410551   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:06.410559   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:06.410625   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:06.448468   58316 cri.go:89] found id: ""
	I0520 11:48:06.448493   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.448505   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:06.448511   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:06.448571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:06.507441   58316 cri.go:89] found id: ""
	I0520 11:48:06.507468   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.507477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:06.507485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:06.507545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:06.547174   58316 cri.go:89] found id: ""
	I0520 11:48:06.547201   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.547212   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:06.547220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:06.547283   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:06.580692   58316 cri.go:89] found id: ""
	I0520 11:48:06.580715   58316 logs.go:276] 0 containers: []
	W0520 11:48:06.580722   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:06.580730   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:06.580741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:06.634822   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:06.634855   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:06.649107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:06.649132   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:48:03.555849   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:06.059026   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:03.363379   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:05.863603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:07.386458   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:09.884793   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:48:06.717523   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:06.717548   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:06.717562   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:06.806538   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:06.806571   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:09.347195   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:09.362597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:09.362677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:09.401314   58316 cri.go:89] found id: ""
	I0520 11:48:09.401338   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.401346   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:09.401353   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:09.401411   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:09.436394   58316 cri.go:89] found id: ""
	I0520 11:48:09.436418   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.436426   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:09.436431   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:09.436484   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:09.470407   58316 cri.go:89] found id: ""
	I0520 11:48:09.470437   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.470447   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:09.470462   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:09.470514   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:09.507720   58316 cri.go:89] found id: ""
	I0520 11:48:09.507747   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.507758   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:09.507763   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:09.507824   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:09.548056   58316 cri.go:89] found id: ""
	I0520 11:48:09.548084   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.548093   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:09.548106   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:09.548172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:09.585187   58316 cri.go:89] found id: ""
	I0520 11:48:09.585214   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.585222   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:09.585227   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:09.585285   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:09.622695   58316 cri.go:89] found id: ""
	I0520 11:48:09.622718   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.622725   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:09.622731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:09.622796   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:09.662751   58316 cri.go:89] found id: ""
	I0520 11:48:09.662775   58316 logs.go:276] 0 containers: []
	W0520 11:48:09.662784   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:09.662792   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:09.662804   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:09.715669   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:09.715709   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:09.729301   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:09.729333   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:09.799189   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:09.799213   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:09.799228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:09.885889   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:09.885916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:08.554935   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.555580   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:08.362397   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:10.863960   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.385193   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:14.385749   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:12.431556   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:12.445470   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:12.445545   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:12.482161   58316 cri.go:89] found id: ""
	I0520 11:48:12.482186   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.482195   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:12.482201   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:12.482251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:12.520051   58316 cri.go:89] found id: ""
	I0520 11:48:12.520077   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.520090   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:12.520096   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:12.520143   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:12.559856   58316 cri.go:89] found id: ""
	I0520 11:48:12.559878   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.559886   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:12.559891   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:12.559947   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:12.599657   58316 cri.go:89] found id: ""
	I0520 11:48:12.599679   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.599687   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:12.599693   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:12.599739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:12.640514   58316 cri.go:89] found id: ""
	I0520 11:48:12.640539   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.640547   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:12.640552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:12.640601   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:12.680337   58316 cri.go:89] found id: ""
	I0520 11:48:12.680367   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.680378   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:12.680384   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:12.680433   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:12.718580   58316 cri.go:89] found id: ""
	I0520 11:48:12.718615   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.718624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:12.718629   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:12.718677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:12.760438   58316 cri.go:89] found id: ""
	I0520 11:48:12.760463   58316 logs.go:276] 0 containers: []
	W0520 11:48:12.760471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:12.760479   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:12.760493   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:12.816564   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:12.816599   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:12.831107   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:12.831130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:12.931165   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:12.931184   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:12.931199   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:13.035436   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:13.035478   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:15.578999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:15.593185   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:15.593280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:15.629644   58316 cri.go:89] found id: ""
	I0520 11:48:15.629672   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.629682   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:15.629689   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:15.629749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:15.668639   58316 cri.go:89] found id: ""
	I0520 11:48:15.668670   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.668680   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:15.668688   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:15.668752   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:15.712844   58316 cri.go:89] found id: ""
	I0520 11:48:15.712868   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.712876   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:15.712881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:15.712954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:15.748969   58316 cri.go:89] found id: ""
	I0520 11:48:15.748998   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.749008   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:15.749020   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:15.749085   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:15.784625   58316 cri.go:89] found id: ""
	I0520 11:48:15.784658   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.784666   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:15.784672   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:15.784724   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:15.825288   58316 cri.go:89] found id: ""
	I0520 11:48:15.825319   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.825326   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:15.825332   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:15.825390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:15.861589   58316 cri.go:89] found id: ""
	I0520 11:48:15.861617   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.861628   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:15.861635   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:15.861691   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:15.897115   58316 cri.go:89] found id: ""
	I0520 11:48:15.897143   58316 logs.go:276] 0 containers: []
	W0520 11:48:15.897155   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:15.897166   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:15.897188   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:15.971286   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:15.971307   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:15.971323   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:16.049567   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:16.049612   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:16.090116   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:16.090145   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:16.140151   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:16.140187   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:13.058312   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.556249   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:13.363188   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:15.363224   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:17.861944   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:16.884628   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.885335   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.887306   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:18.655048   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:18.671663   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:18.671720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:18.721288   58316 cri.go:89] found id: ""
	I0520 11:48:18.721315   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.721333   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:18.721340   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:18.721407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:18.765025   58316 cri.go:89] found id: ""
	I0520 11:48:18.765054   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.765064   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:18.765069   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:18.765135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:18.825054   58316 cri.go:89] found id: ""
	I0520 11:48:18.825083   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.825094   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:18.825110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:18.825172   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:18.861528   58316 cri.go:89] found id: ""
	I0520 11:48:18.861550   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.861560   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:18.861566   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:18.861630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:18.899943   58316 cri.go:89] found id: ""
	I0520 11:48:18.899970   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.899980   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:18.899986   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:18.900063   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:18.936465   58316 cri.go:89] found id: ""
	I0520 11:48:18.936497   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.936507   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:18.936514   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:18.936570   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:18.970599   58316 cri.go:89] found id: ""
	I0520 11:48:18.970636   58316 logs.go:276] 0 containers: []
	W0520 11:48:18.970645   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:18.970653   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:18.970718   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:19.004507   58316 cri.go:89] found id: ""
	I0520 11:48:19.004538   58316 logs.go:276] 0 containers: []
	W0520 11:48:19.004547   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:19.004558   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:19.004573   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:19.057482   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:19.057521   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:19.072189   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:19.072211   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:19.148331   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:19.148356   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:19.148373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:19.228421   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:19.228456   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:18.055244   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:20.056597   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:19.862408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.862760   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:23.386334   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:25.388116   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:21.771146   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:21.786547   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:21.786620   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:21.829816   58316 cri.go:89] found id: ""
	I0520 11:48:21.829846   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.829858   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:21.829865   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:21.829925   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:21.867480   58316 cri.go:89] found id: ""
	I0520 11:48:21.867502   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.867510   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:21.867515   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:21.867571   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:21.904737   58316 cri.go:89] found id: ""
	I0520 11:48:21.904760   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.904767   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:21.904772   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:21.904841   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:21.940431   58316 cri.go:89] found id: ""
	I0520 11:48:21.940457   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.940465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:21.940482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:21.940543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:21.976054   58316 cri.go:89] found id: ""
	I0520 11:48:21.976087   58316 logs.go:276] 0 containers: []
	W0520 11:48:21.976094   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:21.976100   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:21.976158   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:22.020035   58316 cri.go:89] found id: ""
	I0520 11:48:22.020104   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.020116   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:22.020123   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:22.020189   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:22.060981   58316 cri.go:89] found id: ""
	I0520 11:48:22.061002   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.061010   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:22.061015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:22.061059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:22.102854   58316 cri.go:89] found id: ""
	I0520 11:48:22.102875   58316 logs.go:276] 0 containers: []
	W0520 11:48:22.102882   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:22.102892   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:22.102906   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:22.158901   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:22.158933   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.172825   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:22.172850   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:22.246602   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:22.246625   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:22.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:22.325862   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:22.325899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:24.868449   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:24.882851   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:24.882921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:24.922574   58316 cri.go:89] found id: ""
	I0520 11:48:24.922600   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.922610   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:24.922617   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:24.922677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:24.958786   58316 cri.go:89] found id: ""
	I0520 11:48:24.958817   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.958845   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:24.958853   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:24.958906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:24.993346   58316 cri.go:89] found id: ""
	I0520 11:48:24.993375   58316 logs.go:276] 0 containers: []
	W0520 11:48:24.993384   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:24.993391   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:24.993450   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:25.026840   58316 cri.go:89] found id: ""
	I0520 11:48:25.026863   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.026870   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:25.026875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:25.026920   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:25.065675   58316 cri.go:89] found id: ""
	I0520 11:48:25.065710   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.065719   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:25.065727   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:25.065791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:25.100955   58316 cri.go:89] found id: ""
	I0520 11:48:25.100986   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.100997   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:25.101004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:25.101062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:25.134516   58316 cri.go:89] found id: ""
	I0520 11:48:25.134544   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.134554   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:25.134562   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:25.134626   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:25.174455   58316 cri.go:89] found id: ""
	I0520 11:48:25.174482   58316 logs.go:276] 0 containers: []
	W0520 11:48:25.174491   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:25.174501   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:25.174527   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:25.245557   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:25.245579   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:25.245592   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:25.324197   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:25.324228   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:25.362963   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:25.362995   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:25.420778   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:25.420807   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:22.555469   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:24.362744   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:26.865098   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.884029   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.885238   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:27.935757   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:27.949787   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:27.949873   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:27.985142   58316 cri.go:89] found id: ""
	I0520 11:48:27.985174   58316 logs.go:276] 0 containers: []
	W0520 11:48:27.985185   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:27.985191   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:27.985251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:28.020044   58316 cri.go:89] found id: ""
	I0520 11:48:28.020074   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.020082   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:28.020088   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:28.020135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:28.057075   58316 cri.go:89] found id: ""
	I0520 11:48:28.057114   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.057125   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:28.057136   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:28.057182   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:28.093505   58316 cri.go:89] found id: ""
	I0520 11:48:28.093527   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.093534   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:28.093540   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:28.093596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:28.129402   58316 cri.go:89] found id: ""
	I0520 11:48:28.129431   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.129440   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:28.129447   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:28.129505   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:28.166558   58316 cri.go:89] found id: ""
	I0520 11:48:28.166582   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.166590   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:28.166601   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:28.166649   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:28.206833   58316 cri.go:89] found id: ""
	I0520 11:48:28.206863   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.206873   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:28.206881   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:28.206943   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:28.246571   58316 cri.go:89] found id: ""
	I0520 11:48:28.246605   58316 logs.go:276] 0 containers: []
	W0520 11:48:28.246617   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:28.246628   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:28.246641   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:28.297849   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:28.297882   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:28.311578   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:28.311603   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:28.387379   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:28.387399   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:28.387410   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:28.466506   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:28.466540   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.007382   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:31.021671   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:31.021751   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:31.059965   58316 cri.go:89] found id: ""
	I0520 11:48:31.059987   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.059996   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:31.060004   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:31.060062   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:31.095664   58316 cri.go:89] found id: ""
	I0520 11:48:31.095686   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.095693   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:31.095699   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:31.095749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:31.131510   58316 cri.go:89] found id: ""
	I0520 11:48:31.131531   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.131538   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:31.131544   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:31.131596   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:31.166962   58316 cri.go:89] found id: ""
	I0520 11:48:31.167000   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.167011   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:31.167018   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:31.167082   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:31.202831   58316 cri.go:89] found id: ""
	I0520 11:48:31.202856   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.202863   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:31.202868   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:31.202927   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:31.239365   58316 cri.go:89] found id: ""
	I0520 11:48:31.239397   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.239409   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:31.239417   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:31.239477   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:31.276613   58316 cri.go:89] found id: ""
	I0520 11:48:31.276641   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.276651   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:31.276658   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:31.276720   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:31.312008   58316 cri.go:89] found id: ""
	I0520 11:48:31.312040   58316 logs.go:276] 0 containers: []
	W0520 11:48:31.312047   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:31.312056   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:31.312066   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:31.394355   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:31.394383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:31.439251   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:31.439277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:31.494313   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:31.494341   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:31.508675   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:31.508698   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:31.576022   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:27.054447   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.056075   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.556385   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:29.368181   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.862789   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:31.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:33.885831   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.076333   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:34.089640   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:34.089704   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:34.124848   58316 cri.go:89] found id: ""
	I0520 11:48:34.124869   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.124876   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:34.124881   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:34.124948   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:34.159885   58316 cri.go:89] found id: ""
	I0520 11:48:34.159911   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.159920   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:34.159925   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:34.159988   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:34.194029   58316 cri.go:89] found id: ""
	I0520 11:48:34.194051   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.194058   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:34.194063   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:34.194135   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:34.231412   58316 cri.go:89] found id: ""
	I0520 11:48:34.231435   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.231442   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:34.231448   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:34.231497   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:34.268679   58316 cri.go:89] found id: ""
	I0520 11:48:34.268709   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.268721   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:34.268729   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:34.268795   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:34.313585   58316 cri.go:89] found id: ""
	I0520 11:48:34.313607   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.313613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:34.313618   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:34.313681   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:34.348685   58316 cri.go:89] found id: ""
	I0520 11:48:34.348712   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.348723   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:34.348730   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:34.348788   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:34.393438   58316 cri.go:89] found id: ""
	I0520 11:48:34.393462   58316 logs.go:276] 0 containers: []
	W0520 11:48:34.393471   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:34.393480   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:34.393495   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:34.468828   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:34.468859   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:34.468877   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:34.553796   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:34.553827   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:34.594968   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:34.594992   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:34.647814   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:34.647847   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:34.055168   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.056035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:34.362591   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.863645   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:36.385081   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:38.884785   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.886065   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:37.161573   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:37.176133   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:37.176214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:37.214694   58316 cri.go:89] found id: ""
	I0520 11:48:37.214719   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.214729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:37.214736   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:37.214803   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:37.259745   58316 cri.go:89] found id: ""
	I0520 11:48:37.259772   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.259783   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:37.259790   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:37.259852   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:37.299887   58316 cri.go:89] found id: ""
	I0520 11:48:37.299920   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.299931   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:37.299938   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:37.300003   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:37.337428   58316 cri.go:89] found id: ""
	I0520 11:48:37.337456   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.337465   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:37.337472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:37.337532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:37.379635   58316 cri.go:89] found id: ""
	I0520 11:48:37.379662   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.379672   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:37.379680   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:37.379739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:37.421071   58316 cri.go:89] found id: ""
	I0520 11:48:37.421104   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.421112   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:37.421118   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:37.421164   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:37.460130   58316 cri.go:89] found id: ""
	I0520 11:48:37.460154   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.460162   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:37.460168   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:37.460225   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:37.497028   58316 cri.go:89] found id: ""
	I0520 11:48:37.497053   58316 logs.go:276] 0 containers: []
	W0520 11:48:37.497063   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:37.497074   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:37.497089   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:37.550632   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:37.550659   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:37.566172   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:37.566201   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:37.645132   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:37.645150   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:37.645163   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:37.720923   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:37.720966   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:40.260330   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:40.275233   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:40.275294   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:40.310450   58316 cri.go:89] found id: ""
	I0520 11:48:40.310479   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.310488   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:40.310494   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:40.310543   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:40.346171   58316 cri.go:89] found id: ""
	I0520 11:48:40.346200   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.346211   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:40.346217   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:40.346278   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:40.382644   58316 cri.go:89] found id: ""
	I0520 11:48:40.382666   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.382676   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:40.382684   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:40.382739   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:40.422950   58316 cri.go:89] found id: ""
	I0520 11:48:40.422974   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.422984   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:40.422990   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:40.423050   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:40.458818   58316 cri.go:89] found id: ""
	I0520 11:48:40.458864   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.458876   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:40.458884   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:40.458935   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:40.496438   58316 cri.go:89] found id: ""
	I0520 11:48:40.496462   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.496471   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:40.496476   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:40.496530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:40.532432   58316 cri.go:89] found id: ""
	I0520 11:48:40.532457   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.532465   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:40.532471   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:40.532530   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:40.572628   58316 cri.go:89] found id: ""
	I0520 11:48:40.572652   58316 logs.go:276] 0 containers: []
	W0520 11:48:40.572660   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:40.572667   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:40.572678   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:40.623158   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:40.623189   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:40.636609   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:40.636633   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:40.710836   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:40.710866   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:40.710880   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:40.795386   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:40.795420   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:38.556294   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:40.556701   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:39.362479   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:41.861916   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.387356   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.884457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.334999   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:43.348481   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:43.348553   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:43.385528   58316 cri.go:89] found id: ""
	I0520 11:48:43.385554   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.385564   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:43.385571   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:43.385630   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:43.424628   58316 cri.go:89] found id: ""
	I0520 11:48:43.424659   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.424676   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:43.424685   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:43.424746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:43.464043   58316 cri.go:89] found id: ""
	I0520 11:48:43.464075   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.464091   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:43.464097   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:43.464171   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:43.501860   58316 cri.go:89] found id: ""
	I0520 11:48:43.501889   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.501899   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:43.501907   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:43.501970   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:43.541881   58316 cri.go:89] found id: ""
	I0520 11:48:43.541910   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.541919   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:43.541926   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:43.541992   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:43.578759   58316 cri.go:89] found id: ""
	I0520 11:48:43.578782   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.578789   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:43.578794   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:43.578842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:43.618012   58316 cri.go:89] found id: ""
	I0520 11:48:43.618042   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.618050   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:43.618058   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:43.618117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.654219   58316 cri.go:89] found id: ""
	I0520 11:48:43.654243   58316 logs.go:276] 0 containers: []
	W0520 11:48:43.654250   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:43.654258   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:43.654273   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:43.667782   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:43.667805   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:43.740111   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:43.740138   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:43.740155   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:43.823936   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:43.823974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:43.865045   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:43.865070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.421300   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:46.434802   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:46.434878   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:46.469361   58316 cri.go:89] found id: ""
	I0520 11:48:46.469392   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.469401   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:46.469407   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:46.469460   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:46.503922   58316 cri.go:89] found id: ""
	I0520 11:48:46.503950   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.503960   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:46.503968   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:46.504025   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:46.542916   58316 cri.go:89] found id: ""
	I0520 11:48:46.542942   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.542950   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:46.542955   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:46.543011   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:46.581532   58316 cri.go:89] found id: ""
	I0520 11:48:46.581550   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.581558   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:46.581563   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:46.581623   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:46.618650   58316 cri.go:89] found id: ""
	I0520 11:48:46.618677   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.618684   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:46.618689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:46.618740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:46.655409   58316 cri.go:89] found id: ""
	I0520 11:48:46.655431   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.655439   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:46.655445   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:46.655491   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:46.690494   58316 cri.go:89] found id: ""
	I0520 11:48:46.690519   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.690526   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:46.690531   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:46.690579   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:43.056625   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:45.555575   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:43.864183   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.362155   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:47.885261   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:49.886656   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:46.724684   58316 cri.go:89] found id: ""
	I0520 11:48:46.724715   58316 logs.go:276] 0 containers: []
	W0520 11:48:46.724725   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:46.724737   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:46.724752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:46.807899   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:46.807931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:46.850852   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:46.850883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:46.903474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:46.903502   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:46.917073   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:46.917105   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:47.006606   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.507632   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:49.521080   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:49.521160   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:49.558639   58316 cri.go:89] found id: ""
	I0520 11:48:49.558671   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.558678   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:49.558684   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:49.558732   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:49.595470   58316 cri.go:89] found id: ""
	I0520 11:48:49.595497   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.595506   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:49.595513   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:49.595573   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:49.630164   58316 cri.go:89] found id: ""
	I0520 11:48:49.630193   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.630204   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:49.630211   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:49.630272   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:49.664835   58316 cri.go:89] found id: ""
	I0520 11:48:49.664863   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.664873   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:49.664882   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:49.664954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:49.700428   58316 cri.go:89] found id: ""
	I0520 11:48:49.700483   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.700497   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:49.700504   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:49.700569   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:49.737184   58316 cri.go:89] found id: ""
	I0520 11:48:49.737216   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.737226   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:49.737234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:49.737303   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:49.772577   58316 cri.go:89] found id: ""
	I0520 11:48:49.772601   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.772611   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:49.772617   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:49.772678   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:49.806926   58316 cri.go:89] found id: ""
	I0520 11:48:49.806954   58316 logs.go:276] 0 containers: []
	W0520 11:48:49.806966   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:49.806977   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:49.806991   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:49.861208   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:49.861242   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:49.875054   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:49.875084   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:49.949556   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:49.949582   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:49.949596   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:50.032181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:50.032226   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:48.055288   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.057220   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:48.362952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:50.363988   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.863467   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.385482   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.386207   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:52.578486   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:52.592465   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:52.592534   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:52.627795   58316 cri.go:89] found id: ""
	I0520 11:48:52.627828   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.627838   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:52.627845   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:52.627906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:52.663863   58316 cri.go:89] found id: ""
	I0520 11:48:52.663889   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.663896   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:52.663902   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:52.663956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:52.697912   58316 cri.go:89] found id: ""
	I0520 11:48:52.697937   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.697944   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:52.697950   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:52.698010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:52.751379   58316 cri.go:89] found id: ""
	I0520 11:48:52.751401   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.751408   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:52.751413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:52.751473   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:52.786683   58316 cri.go:89] found id: ""
	I0520 11:48:52.786709   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.786716   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:52.786721   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:52.786782   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:52.821720   58316 cri.go:89] found id: ""
	I0520 11:48:52.821747   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.821758   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:52.821765   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:52.821828   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:52.858569   58316 cri.go:89] found id: ""
	I0520 11:48:52.858596   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.858607   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:52.858615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:52.858671   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:52.897524   58316 cri.go:89] found id: ""
	I0520 11:48:52.897551   58316 logs.go:276] 0 containers: []
	W0520 11:48:52.897560   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:52.897569   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:52.897584   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:52.977792   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:52.977816   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:52.977831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:53.064964   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:53.065009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:53.108613   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:53.108649   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:53.164692   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:53.164728   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:55.679593   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:55.693505   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:55.693576   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:55.730013   58316 cri.go:89] found id: ""
	I0520 11:48:55.730043   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.730053   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:55.730061   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:55.730121   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:55.769739   58316 cri.go:89] found id: ""
	I0520 11:48:55.769769   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.769780   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:55.769787   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:55.769842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:55.809831   58316 cri.go:89] found id: ""
	I0520 11:48:55.809867   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.809878   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:55.809886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:55.809946   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:55.843487   58316 cri.go:89] found id: ""
	I0520 11:48:55.843513   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.843521   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:55.843527   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:55.843586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:55.883327   58316 cri.go:89] found id: ""
	I0520 11:48:55.883352   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.883362   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:55.883372   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:55.883430   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:55.921507   58316 cri.go:89] found id: ""
	I0520 11:48:55.921534   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.921544   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:55.921552   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:55.921612   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:55.957042   58316 cri.go:89] found id: ""
	I0520 11:48:55.957066   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.957073   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:55.957079   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:55.957141   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:55.990871   58316 cri.go:89] found id: ""
	I0520 11:48:55.990899   58316 logs.go:276] 0 containers: []
	W0520 11:48:55.990907   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:55.990916   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:55.990931   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:56.041345   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:56.041373   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:56.055628   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:56.055653   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:56.128845   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:56.128867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:56.128883   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:56.218181   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:56.218215   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:52.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:54.557133   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:55.362033   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:57.362624   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:56.885083   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.885662   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:58.760650   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:48:58.774991   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:48:58.775048   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:48:58.817777   58316 cri.go:89] found id: ""
	I0520 11:48:58.817807   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.817819   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:48:58.817826   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:48:58.817885   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:48:58.855909   58316 cri.go:89] found id: ""
	I0520 11:48:58.855934   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.855942   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:48:58.855949   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:48:58.856010   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:48:58.893369   58316 cri.go:89] found id: ""
	I0520 11:48:58.893396   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.893405   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:48:58.893413   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:48:58.893475   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:48:58.929975   58316 cri.go:89] found id: ""
	I0520 11:48:58.930001   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.930010   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:48:58.930017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:48:58.930067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:48:58.963865   58316 cri.go:89] found id: ""
	I0520 11:48:58.963890   58316 logs.go:276] 0 containers: []
	W0520 11:48:58.963898   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:48:58.963904   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:48:58.963954   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:48:59.000604   58316 cri.go:89] found id: ""
	I0520 11:48:59.000624   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.000631   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:48:59.000637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:48:59.000685   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:48:59.037408   58316 cri.go:89] found id: ""
	I0520 11:48:59.037433   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.037442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:48:59.037452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:48:59.037512   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:48:59.073087   58316 cri.go:89] found id: ""
	I0520 11:48:59.073109   58316 logs.go:276] 0 containers: []
	W0520 11:48:59.073118   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:48:59.073126   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:48:59.073142   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:48:59.124885   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:48:59.124916   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:48:59.154020   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:48:59.154047   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:48:59.277149   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:48:59.277199   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:48:59.277216   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:48:59.363126   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:48:59.363157   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:48:57.055207   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.055409   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.554795   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:48:59.362869   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.363107   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.385471   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.386313   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.386387   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:01.906903   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:01.923346   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:01.923407   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:01.961612   58316 cri.go:89] found id: ""
	I0520 11:49:01.961635   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.961643   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:01.961648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:01.961702   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:01.997803   58316 cri.go:89] found id: ""
	I0520 11:49:01.997830   58316 logs.go:276] 0 containers: []
	W0520 11:49:01.997839   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:01.997844   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:01.997894   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:02.032142   58316 cri.go:89] found id: ""
	I0520 11:49:02.032169   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.032177   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:02.032182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:02.032226   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:02.070641   58316 cri.go:89] found id: ""
	I0520 11:49:02.070664   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.070671   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:02.070676   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:02.070729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:02.107270   58316 cri.go:89] found id: ""
	I0520 11:49:02.107296   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.107302   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:02.107308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:02.107356   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:02.144676   58316 cri.go:89] found id: ""
	I0520 11:49:02.144702   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.144709   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:02.144717   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:02.144798   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:02.184382   58316 cri.go:89] found id: ""
	I0520 11:49:02.184403   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.184410   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:02.184416   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:02.184466   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:02.223110   58316 cri.go:89] found id: ""
	I0520 11:49:02.223140   58316 logs.go:276] 0 containers: []
	W0520 11:49:02.223152   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:02.223164   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:02.223181   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:02.236272   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:02.236305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:02.312046   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:02.312065   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:02.312076   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:02.397284   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:02.397313   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:02.438720   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:02.438755   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:04.988565   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:05.005842   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:05.005906   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:05.071468   58316 cri.go:89] found id: ""
	I0520 11:49:05.071496   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.071505   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:05.071515   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:05.071574   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:05.125767   58316 cri.go:89] found id: ""
	I0520 11:49:05.125793   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.125801   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:05.125806   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:05.125868   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:05.162651   58316 cri.go:89] found id: ""
	I0520 11:49:05.162676   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.162684   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:05.162689   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:05.162746   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:05.205140   58316 cri.go:89] found id: ""
	I0520 11:49:05.205168   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.205176   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:05.205182   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:05.205245   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:05.245187   58316 cri.go:89] found id: ""
	I0520 11:49:05.245209   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.245216   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:05.245221   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:05.245280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:05.287698   58316 cri.go:89] found id: ""
	I0520 11:49:05.287728   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.287739   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:05.287746   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:05.287809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:05.325607   58316 cri.go:89] found id: ""
	I0520 11:49:05.325637   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.325649   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:05.325655   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:05.325716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:05.360662   58316 cri.go:89] found id: ""
	I0520 11:49:05.360690   58316 logs.go:276] 0 containers: []
	W0520 11:49:05.360700   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:05.360711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:05.360725   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:05.402342   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:05.402376   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:05.454403   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:05.454436   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:05.468882   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:05.468912   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:05.546520   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:05.546546   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:05.546567   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:03.555624   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:06.055651   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:03.862582   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:05.862917   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.863213   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:07.886494   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.384808   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:08.128785   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:08.144963   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:08.145053   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:08.190121   58316 cri.go:89] found id: ""
	I0520 11:49:08.190154   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.190165   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:08.190174   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:08.190253   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:08.227310   58316 cri.go:89] found id: ""
	I0520 11:49:08.227334   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.227341   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:08.227346   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:08.227390   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:08.262606   58316 cri.go:89] found id: ""
	I0520 11:49:08.262631   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.262639   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:08.262646   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:08.262706   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:08.297061   58316 cri.go:89] found id: ""
	I0520 11:49:08.297092   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.297103   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:08.297110   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:08.297170   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:08.337356   58316 cri.go:89] found id: ""
	I0520 11:49:08.337394   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.337410   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:08.337418   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:08.337481   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:08.372576   58316 cri.go:89] found id: ""
	I0520 11:49:08.372603   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.372613   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:08.372620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:08.372677   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:08.409394   58316 cri.go:89] found id: ""
	I0520 11:49:08.409427   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.409442   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:08.409449   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:08.409562   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:08.445119   58316 cri.go:89] found id: ""
	I0520 11:49:08.445145   58316 logs.go:276] 0 containers: []
	W0520 11:49:08.445153   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:08.445161   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:08.445171   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:08.457873   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:08.457899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:08.530873   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:08.530902   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:08.530920   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.609655   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:08.609695   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:08.656568   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:08.656601   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.209318   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:11.223883   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:11.223958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:11.260006   58316 cri.go:89] found id: ""
	I0520 11:49:11.260047   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.260060   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:11.260068   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:11.260129   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:11.296348   58316 cri.go:89] found id: ""
	I0520 11:49:11.296375   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.296385   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:11.296392   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:11.296452   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:11.336546   58316 cri.go:89] found id: ""
	I0520 11:49:11.336573   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.336583   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:11.336590   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:11.336652   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:11.373216   58316 cri.go:89] found id: ""
	I0520 11:49:11.373244   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.373254   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:11.373261   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:11.373320   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:11.411555   58316 cri.go:89] found id: ""
	I0520 11:49:11.411586   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.411596   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:11.411604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:11.411663   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:11.446390   58316 cri.go:89] found id: ""
	I0520 11:49:11.446419   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.446428   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:11.446433   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:11.446489   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:11.481584   58316 cri.go:89] found id: ""
	I0520 11:49:11.481613   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.481624   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:11.481637   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:11.481701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:11.518092   58316 cri.go:89] found id: ""
	I0520 11:49:11.518121   58316 logs.go:276] 0 containers: []
	W0520 11:49:11.518131   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:11.518143   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:11.518158   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:11.569824   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:11.569859   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:11.583600   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:11.583620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:11.663761   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:11.663799   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:11.663817   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:08.055875   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.056882   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:10.362663   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.863246   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:12.388256   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.887822   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:11.743430   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:11.743462   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:14.289940   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:14.307574   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:14.307645   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:14.349602   58316 cri.go:89] found id: ""
	I0520 11:49:14.349628   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.349637   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:14.349644   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:14.349701   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:14.391484   58316 cri.go:89] found id: ""
	I0520 11:49:14.391508   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.391516   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:14.391521   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:14.391581   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:14.437859   58316 cri.go:89] found id: ""
	I0520 11:49:14.437884   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.437894   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:14.437902   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:14.437965   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:14.474353   58316 cri.go:89] found id: ""
	I0520 11:49:14.474379   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.474387   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:14.474392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:14.474439   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:14.517430   58316 cri.go:89] found id: ""
	I0520 11:49:14.517459   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.517466   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:14.517472   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:14.517523   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:14.552166   58316 cri.go:89] found id: ""
	I0520 11:49:14.552196   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.552207   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:14.552220   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:14.552280   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:14.589295   58316 cri.go:89] found id: ""
	I0520 11:49:14.589322   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.589330   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:14.589336   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:14.589396   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:14.626221   58316 cri.go:89] found id: ""
	I0520 11:49:14.626244   58316 logs.go:276] 0 containers: []
	W0520 11:49:14.626251   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:14.626259   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:14.626271   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:14.681867   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:14.681899   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:14.695328   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:14.695355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:14.773077   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:14.773102   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:14.773118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:14.857344   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:14.857378   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:12.557989   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:15.056569   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:14.864153   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.362222   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:16.888363   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.385523   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:17.402259   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:17.415895   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:17.415956   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:17.458403   58316 cri.go:89] found id: ""
	I0520 11:49:17.458432   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.458443   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:17.458450   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:17.458511   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:17.495603   58316 cri.go:89] found id: ""
	I0520 11:49:17.495633   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.495650   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:17.495657   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:17.495716   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:17.535416   58316 cri.go:89] found id: ""
	I0520 11:49:17.535440   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.535448   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:17.535452   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:17.535515   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:17.574842   58316 cri.go:89] found id: ""
	I0520 11:49:17.574868   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.574875   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:17.574880   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:17.574933   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:17.608744   58316 cri.go:89] found id: ""
	I0520 11:49:17.608773   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.608783   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:17.608791   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:17.608864   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:17.645101   58316 cri.go:89] found id: ""
	I0520 11:49:17.645126   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.645134   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:17.645140   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:17.645199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:17.682620   58316 cri.go:89] found id: ""
	I0520 11:49:17.682651   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.682661   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:17.682669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:17.682725   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:17.718392   58316 cri.go:89] found id: ""
	I0520 11:49:17.718414   58316 logs.go:276] 0 containers: []
	W0520 11:49:17.718422   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:17.718430   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:17.718441   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:17.772103   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:17.772138   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:17.786071   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:17.786102   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:17.859596   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:17.859617   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:17.859634   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:17.938723   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:17.938754   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:20.480539   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:20.493677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:20.493745   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:20.528213   58316 cri.go:89] found id: ""
	I0520 11:49:20.528241   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.528251   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:20.528258   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:20.528344   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:20.564821   58316 cri.go:89] found id: ""
	I0520 11:49:20.564844   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.564853   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:20.564860   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:20.564921   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:20.599015   58316 cri.go:89] found id: ""
	I0520 11:49:20.599051   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.599063   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:20.599070   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:20.599125   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:20.637115   58316 cri.go:89] found id: ""
	I0520 11:49:20.637160   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.637169   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:20.637175   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:20.637221   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:20.672699   58316 cri.go:89] found id: ""
	I0520 11:49:20.672728   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.672738   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:20.672745   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:20.672809   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:20.709278   58316 cri.go:89] found id: ""
	I0520 11:49:20.709309   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.709321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:20.709327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:20.709394   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:20.743159   58316 cri.go:89] found id: ""
	I0520 11:49:20.743182   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.743188   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:20.743194   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:20.743239   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:20.780848   58316 cri.go:89] found id: ""
	I0520 11:49:20.780878   58316 logs.go:276] 0 containers: []
	W0520 11:49:20.780890   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:20.780902   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:20.780917   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:20.834178   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:20.834206   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:20.848026   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:20.848049   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:20.926067   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:20.926088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:20.926103   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:21.003153   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:21.003184   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:17.056820   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.556351   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:19.362408   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.861822   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:21.386338   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.390122   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:25.885748   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.545830   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:23.560697   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:23.560781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:23.595691   58316 cri.go:89] found id: ""
	I0520 11:49:23.595719   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.595729   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:23.595737   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:23.595805   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:23.634526   58316 cri.go:89] found id: ""
	I0520 11:49:23.634555   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.634565   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:23.634573   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:23.634631   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:23.670101   58316 cri.go:89] found id: ""
	I0520 11:49:23.670144   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.670155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:23.670169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:23.670228   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:23.704352   58316 cri.go:89] found id: ""
	I0520 11:49:23.704376   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.704384   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:23.704389   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:23.704435   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:23.745633   58316 cri.go:89] found id: ""
	I0520 11:49:23.745657   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.745664   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:23.745669   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:23.745729   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:23.784631   58316 cri.go:89] found id: ""
	I0520 11:49:23.784659   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.784671   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:23.784678   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:23.784742   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:23.819884   58316 cri.go:89] found id: ""
	I0520 11:49:23.819911   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.819918   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:23.819924   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:23.819976   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:23.857396   58316 cri.go:89] found id: ""
	I0520 11:49:23.857427   58316 logs.go:276] 0 containers: []
	W0520 11:49:23.857436   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:23.857444   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:23.857455   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:23.931405   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:23.931428   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:23.931445   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:24.004717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:24.004752   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:24.047465   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:24.047499   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:24.098918   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:24.098950   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:26.613820   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:26.627313   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:26.627386   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:26.661418   58316 cri.go:89] found id: ""
	I0520 11:49:26.661450   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.661460   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:26.661467   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:26.661532   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:22.057526   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:24.554983   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:23.864449   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.363217   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.384733   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:30.884703   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:26.697938   58316 cri.go:89] found id: ""
	I0520 11:49:26.699437   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.699450   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:26.699456   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:26.699506   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:26.735017   58316 cri.go:89] found id: ""
	I0520 11:49:26.735053   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.735062   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:26.735066   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:26.735115   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:26.771318   58316 cri.go:89] found id: ""
	I0520 11:49:26.771345   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.771355   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:26.771360   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:26.771410   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:26.806365   58316 cri.go:89] found id: ""
	I0520 11:49:26.806396   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.806408   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:26.806415   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:26.806468   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:26.844289   58316 cri.go:89] found id: ""
	I0520 11:49:26.844313   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.844321   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:26.844327   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:26.844373   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:26.883496   58316 cri.go:89] found id: ""
	I0520 11:49:26.883523   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.883534   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:26.883541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:26.883600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:26.918427   58316 cri.go:89] found id: ""
	I0520 11:49:26.918451   58316 logs.go:276] 0 containers: []
	W0520 11:49:26.918458   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:26.918465   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:26.918477   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:26.986423   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:26.986444   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:26.986457   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:27.066909   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:27.066938   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.110874   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:27.110911   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:27.167481   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:27.167520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:29.682559   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:29.695460   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:29.695520   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:29.731189   58316 cri.go:89] found id: ""
	I0520 11:49:29.731218   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.731230   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:29.731238   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:29.731297   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:29.766067   58316 cri.go:89] found id: ""
	I0520 11:49:29.766109   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.766120   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:29.766131   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:29.766200   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:29.800840   58316 cri.go:89] found id: ""
	I0520 11:49:29.800868   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.800879   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:29.800886   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:29.800959   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:29.838971   58316 cri.go:89] found id: ""
	I0520 11:49:29.838999   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.839009   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:29.839017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:29.839078   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:29.873954   58316 cri.go:89] found id: ""
	I0520 11:49:29.873977   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.873987   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:29.873994   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:29.874051   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:29.912574   58316 cri.go:89] found id: ""
	I0520 11:49:29.912602   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.912612   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:29.912620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:29.912687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:29.945949   58316 cri.go:89] found id: ""
	I0520 11:49:29.945980   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.945989   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:29.945995   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:29.946043   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:29.979804   58316 cri.go:89] found id: ""
	I0520 11:49:29.979830   58316 logs.go:276] 0 containers: []
	W0520 11:49:29.979847   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:29.979856   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:29.979868   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:30.027811   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:30.027844   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:30.043082   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:30.043115   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:30.114796   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:30.114827   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:30.114842   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:30.196523   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:30.196561   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:27.056582   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:29.057033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.554824   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:28.862736   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:31.362469   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.885757   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.385655   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:32.737682   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:32.750903   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:32.750979   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:32.786874   58316 cri.go:89] found id: ""
	I0520 11:49:32.786897   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.786907   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:32.786915   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:32.786968   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:32.821758   58316 cri.go:89] found id: ""
	I0520 11:49:32.821783   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.821794   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:32.821800   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:32.821855   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:32.857680   58316 cri.go:89] found id: ""
	I0520 11:49:32.857700   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.857707   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:32.857712   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:32.857756   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:32.897970   58316 cri.go:89] found id: ""
	I0520 11:49:32.897990   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.897999   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:32.898004   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:32.898059   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:32.932971   58316 cri.go:89] found id: ""
	I0520 11:49:32.932999   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.933007   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:32.933015   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:32.933067   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:32.968103   58316 cri.go:89] found id: ""
	I0520 11:49:32.968132   58316 logs.go:276] 0 containers: []
	W0520 11:49:32.968140   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:32.968147   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:32.968199   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:33.006101   58316 cri.go:89] found id: ""
	I0520 11:49:33.006125   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.006132   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:33.006137   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:33.006184   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:33.043611   58316 cri.go:89] found id: ""
	I0520 11:49:33.043639   58316 logs.go:276] 0 containers: []
	W0520 11:49:33.043650   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:33.043659   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:33.043677   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:33.119735   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.119758   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:33.119773   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:33.199361   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:33.199408   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:33.263421   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:33.263452   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:33.315644   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:33.315674   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:35.831034   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:35.845272   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:35.845343   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:35.882951   58316 cri.go:89] found id: ""
	I0520 11:49:35.882980   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.882988   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:35.882994   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:35.883042   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:35.919528   58316 cri.go:89] found id: ""
	I0520 11:49:35.919555   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.919564   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:35.919569   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:35.919618   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:35.957880   58316 cri.go:89] found id: ""
	I0520 11:49:35.957904   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.957912   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:35.957918   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:35.957964   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:35.995267   58316 cri.go:89] found id: ""
	I0520 11:49:35.995291   58316 logs.go:276] 0 containers: []
	W0520 11:49:35.995299   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:35.995308   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:35.995367   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:36.032979   58316 cri.go:89] found id: ""
	I0520 11:49:36.033011   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.033022   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:36.033030   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:36.033092   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:36.071270   58316 cri.go:89] found id: ""
	I0520 11:49:36.071293   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.071303   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:36.071311   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:36.071434   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:36.105983   58316 cri.go:89] found id: ""
	I0520 11:49:36.106014   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.106024   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:36.106032   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:36.106094   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:36.142978   58316 cri.go:89] found id: ""
	I0520 11:49:36.143006   58316 logs.go:276] 0 containers: []
	W0520 11:49:36.143026   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:36.143037   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:36.143061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:36.227019   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:36.227055   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:36.269608   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:36.269646   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:36.321974   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:36.322009   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:36.339332   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:36.339361   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:36.417118   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:33.555033   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.555749   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:33.363377   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:35.863335   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:37.885668   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.385866   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.918143   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:38.933253   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:38.933325   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:38.988894   58316 cri.go:89] found id: ""
	I0520 11:49:38.988940   58316 logs.go:276] 0 containers: []
	W0520 11:49:38.988952   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:38.988961   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:38.989030   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:39.025902   58316 cri.go:89] found id: ""
	I0520 11:49:39.025928   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.025938   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:39.025945   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:39.026007   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:39.061920   58316 cri.go:89] found id: ""
	I0520 11:49:39.061945   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.061954   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:39.061961   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:39.062018   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:39.098125   58316 cri.go:89] found id: ""
	I0520 11:49:39.098152   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.098162   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:39.098169   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:39.098229   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:39.139286   58316 cri.go:89] found id: ""
	I0520 11:49:39.139318   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.139328   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:39.139335   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:39.139404   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:39.174437   58316 cri.go:89] found id: ""
	I0520 11:49:39.174468   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.174477   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:39.174485   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:39.174551   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:39.216176   58316 cri.go:89] found id: ""
	I0520 11:49:39.216211   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.216221   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:39.216228   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:39.216290   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:39.254213   58316 cri.go:89] found id: ""
	I0520 11:49:39.254235   58316 logs.go:276] 0 containers: []
	W0520 11:49:39.254243   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:39.254252   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:39.254263   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:39.334325   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:39.334350   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:39.334367   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:39.411799   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:39.411830   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:39.455093   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:39.455118   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:39.507557   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:39.507594   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:37.559287   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.055915   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:38.362779   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:40.862212   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.862459   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.385950   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.387966   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:42.021921   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:42.035757   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:42.035829   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:42.072072   58316 cri.go:89] found id: ""
	I0520 11:49:42.072103   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.072112   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:42.072118   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:42.072180   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:42.108038   58316 cri.go:89] found id: ""
	I0520 11:49:42.108061   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.108068   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:42.108074   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:42.108130   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:42.146464   58316 cri.go:89] found id: ""
	I0520 11:49:42.146492   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.146503   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:42.146510   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:42.146578   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:42.182566   58316 cri.go:89] found id: ""
	I0520 11:49:42.182598   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.182608   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:42.182615   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:42.182688   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:42.220159   58316 cri.go:89] found id: ""
	I0520 11:49:42.220191   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.220201   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:42.220209   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:42.220268   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:42.255952   58316 cri.go:89] found id: ""
	I0520 11:49:42.255975   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.255982   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:42.255988   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:42.256036   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:42.294347   58316 cri.go:89] found id: ""
	I0520 11:49:42.294376   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.294387   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:42.294394   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:42.294456   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:42.332019   58316 cri.go:89] found id: ""
	I0520 11:49:42.332043   58316 logs.go:276] 0 containers: []
	W0520 11:49:42.332050   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:42.332058   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:42.332074   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.408479   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:42.408520   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:42.445579   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:42.445620   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:42.496278   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:42.496312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:42.510258   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:42.510292   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:42.582179   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.082674   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:45.096690   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:45.096749   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:45.132031   58316 cri.go:89] found id: ""
	I0520 11:49:45.132054   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.132068   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:45.132074   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:45.132126   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:45.167575   58316 cri.go:89] found id: ""
	I0520 11:49:45.167601   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.167612   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:45.167619   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:45.167679   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:45.202021   58316 cri.go:89] found id: ""
	I0520 11:49:45.202047   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.202054   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:45.202059   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:45.202113   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:45.236630   58316 cri.go:89] found id: ""
	I0520 11:49:45.236659   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.236669   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:45.236677   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:45.236740   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:45.272509   58316 cri.go:89] found id: ""
	I0520 11:49:45.272536   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.272543   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:45.272549   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:45.272606   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:45.308221   58316 cri.go:89] found id: ""
	I0520 11:49:45.308242   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.308250   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:45.308256   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:45.308298   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:45.344202   58316 cri.go:89] found id: ""
	I0520 11:49:45.344226   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.344233   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:45.344239   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:45.344299   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:45.392276   58316 cri.go:89] found id: ""
	I0520 11:49:45.392304   58316 logs.go:276] 0 containers: []
	W0520 11:49:45.392315   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:45.392325   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:45.392336   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:45.451325   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:45.451355   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:45.509835   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:45.509867   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:45.523790   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:45.523815   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:45.599837   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:45.599867   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:45.599886   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:42.056415   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:44.555806   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:45.363614   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:47.863671   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:46.884916   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.885117   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:48.175148   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:48.188998   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:48.189102   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:48.225492   58316 cri.go:89] found id: ""
	I0520 11:49:48.225528   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.225538   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:48.225548   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:48.225611   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:48.259197   58316 cri.go:89] found id: ""
	I0520 11:49:48.259221   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.259231   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:48.259237   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:48.259296   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:48.298760   58316 cri.go:89] found id: ""
	I0520 11:49:48.298786   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.298796   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:48.298803   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:48.298876   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:48.339188   58316 cri.go:89] found id: ""
	I0520 11:49:48.339217   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.339227   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:48.339234   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:48.339302   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:48.378659   58316 cri.go:89] found id: ""
	I0520 11:49:48.378689   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.378699   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:48.378706   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:48.378763   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:48.417637   58316 cri.go:89] found id: ""
	I0520 11:49:48.417664   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.417674   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:48.417681   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:48.417757   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:48.461412   58316 cri.go:89] found id: ""
	I0520 11:49:48.461439   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.461450   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:48.461457   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:48.461521   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:48.502919   58316 cri.go:89] found id: ""
	I0520 11:49:48.502942   58316 logs.go:276] 0 containers: []
	W0520 11:49:48.502949   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:48.502958   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:48.502974   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:48.557007   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:48.557037   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:48.571457   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:48.571485   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:48.642339   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:48.642362   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:48.642379   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:48.720711   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:48.720750   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:51.262685   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:51.276276   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:51.276352   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:51.313139   58316 cri.go:89] found id: ""
	I0520 11:49:51.313169   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.313178   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:51.313184   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:51.313233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:51.349742   58316 cri.go:89] found id: ""
	I0520 11:49:51.349771   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.349781   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:51.349789   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:51.349847   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:51.388819   58316 cri.go:89] found id: ""
	I0520 11:49:51.388856   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.388868   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:51.388875   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:51.388958   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:51.424734   58316 cri.go:89] found id: ""
	I0520 11:49:51.424769   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.424779   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:51.424789   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:51.424867   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:51.464135   58316 cri.go:89] found id: ""
	I0520 11:49:51.464163   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.464172   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:51.464178   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:51.464241   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:51.505948   58316 cri.go:89] found id: ""
	I0520 11:49:51.505974   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.505984   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:51.505992   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:51.506057   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:51.542350   58316 cri.go:89] found id: ""
	I0520 11:49:51.542378   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.542389   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:51.542397   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:51.542461   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:51.577991   58316 cri.go:89] found id: ""
	I0520 11:49:51.578022   58316 logs.go:276] 0 containers: []
	W0520 11:49:51.578031   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:51.578043   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:51.578058   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:51.630392   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:51.630427   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:51.645173   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:51.645198   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:49:47.056668   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:49.554453   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.555691   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:50.363307   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:52.862785   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:51.386280   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:53.386860   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:55.886047   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	W0520 11:49:51.715271   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:51.715297   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:51.715312   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:51.798792   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:51.798831   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.343631   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:54.358716   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:54.358791   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:54.394117   58316 cri.go:89] found id: ""
	I0520 11:49:54.394145   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.394154   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:54.394162   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:54.394219   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:54.435611   58316 cri.go:89] found id: ""
	I0520 11:49:54.435636   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.435644   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:54.435652   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:54.435711   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:54.469438   58316 cri.go:89] found id: ""
	I0520 11:49:54.469464   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.469474   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:54.469482   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:54.469544   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:54.508113   58316 cri.go:89] found id: ""
	I0520 11:49:54.508143   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.508154   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:54.508161   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:54.508223   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:54.544504   58316 cri.go:89] found id: ""
	I0520 11:49:54.544528   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.544536   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:54.544541   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:54.544598   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:54.584691   58316 cri.go:89] found id: ""
	I0520 11:49:54.584725   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.584735   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:54.584743   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:54.584804   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:54.618505   58316 cri.go:89] found id: ""
	I0520 11:49:54.618525   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.618532   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:54.618538   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:54.618589   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:54.652670   58316 cri.go:89] found id: ""
	I0520 11:49:54.652701   58316 logs.go:276] 0 containers: []
	W0520 11:49:54.652711   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:54.652721   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:54.652741   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:54.692782   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:54.692808   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:54.746300   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:54.746340   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:54.760324   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:54.760357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:54.835091   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:49:54.835118   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:54.835130   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:54.055131   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:56.055386   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:54.863545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.363023   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:58.385457   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.385971   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:57.422176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:49:57.437369   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:49:57.437429   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:49:57.472083   58316 cri.go:89] found id: ""
	I0520 11:49:57.472119   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.472130   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:49:57.472137   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:49:57.472197   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:49:57.507307   58316 cri.go:89] found id: ""
	I0520 11:49:57.507332   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.507339   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:49:57.507345   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:49:57.507405   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:49:57.548112   58316 cri.go:89] found id: ""
	I0520 11:49:57.548144   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.548155   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:49:57.548162   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:49:57.548227   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:49:57.585751   58316 cri.go:89] found id: ""
	I0520 11:49:57.585790   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.585801   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:49:57.585809   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:49:57.585879   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:49:57.626577   58316 cri.go:89] found id: ""
	I0520 11:49:57.626609   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.626620   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:49:57.626628   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:49:57.626687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:49:57.662402   58316 cri.go:89] found id: ""
	I0520 11:49:57.662429   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.662436   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:49:57.662441   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:49:57.662503   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:49:57.695393   58316 cri.go:89] found id: ""
	I0520 11:49:57.695417   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.695424   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:49:57.695429   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:49:57.695487   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:49:57.730011   58316 cri.go:89] found id: ""
	I0520 11:49:57.730039   58316 logs.go:276] 0 containers: []
	W0520 11:49:57.730049   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:49:57.730059   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:49:57.730070   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:49:57.807727   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:49:57.807763   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:57.853562   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:49:57.853595   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:49:57.909322   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:49:57.909357   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:49:57.923277   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:49:57.923305   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:49:57.994858   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.495176   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:00.508821   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:00.508889   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:00.543001   58316 cri.go:89] found id: ""
	I0520 11:50:00.543028   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.543045   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:00.543057   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:00.543117   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:00.582547   58316 cri.go:89] found id: ""
	I0520 11:50:00.582574   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.582590   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:00.582598   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:00.582659   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:00.617962   58316 cri.go:89] found id: ""
	I0520 11:50:00.617993   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.618003   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:00.618011   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:00.618073   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:00.655065   58316 cri.go:89] found id: ""
	I0520 11:50:00.655099   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.655110   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:00.655117   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:00.655177   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:00.689497   58316 cri.go:89] found id: ""
	I0520 11:50:00.689521   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.689528   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:00.689533   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:00.689586   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:00.723710   58316 cri.go:89] found id: ""
	I0520 11:50:00.723743   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.723754   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:00.723761   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:00.723821   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:00.757900   58316 cri.go:89] found id: ""
	I0520 11:50:00.757943   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.757950   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:00.757956   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:00.758014   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:00.792196   58316 cri.go:89] found id: ""
	I0520 11:50:00.792223   58316 logs.go:276] 0 containers: []
	W0520 11:50:00.792232   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:00.792242   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:00.792257   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:00.845759   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:00.845791   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:00.860633   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:00.860658   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:00.936030   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:00.936050   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:00.936061   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:01.011562   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:01.011602   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:49:58.554785   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:00.555726   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:49:59.365132   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:01.863538   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:02.884908   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.384186   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.554533   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:03.568017   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:03.568086   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:03.605583   58316 cri.go:89] found id: ""
	I0520 11:50:03.605610   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.605619   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:03.605626   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:03.605683   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.640875   58316 cri.go:89] found id: ""
	I0520 11:50:03.640905   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.640915   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:03.640939   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:03.641004   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:03.676064   58316 cri.go:89] found id: ""
	I0520 11:50:03.676088   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.676096   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:03.676103   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:03.676155   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:03.710791   58316 cri.go:89] found id: ""
	I0520 11:50:03.710815   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.710823   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:03.710828   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:03.710883   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:03.746348   58316 cri.go:89] found id: ""
	I0520 11:50:03.746375   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.746386   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:03.746392   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:03.746451   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:03.782220   58316 cri.go:89] found id: ""
	I0520 11:50:03.782244   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.782253   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:03.782259   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:03.782319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:03.819678   58316 cri.go:89] found id: ""
	I0520 11:50:03.819712   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.819724   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:03.819731   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:03.819789   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:03.856137   58316 cri.go:89] found id: ""
	I0520 11:50:03.856163   58316 logs.go:276] 0 containers: []
	W0520 11:50:03.856172   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:03.856180   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:03.856191   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:03.914360   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:03.914391   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:03.929401   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:03.929431   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:03.999763   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:03.999882   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:03.999905   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:04.076921   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:04.076971   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:06.623577   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:06.636265   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:06.636329   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:06.670653   58316 cri.go:89] found id: ""
	I0520 11:50:06.670682   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.670692   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:06.670698   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:06.670790   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:03.054799   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:05.056653   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:03.865619   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.362355   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:07.387760   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.885607   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:06.703498   58316 cri.go:89] found id: ""
	I0520 11:50:06.703528   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.703537   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:06.703543   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:06.703600   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:06.736579   58316 cri.go:89] found id: ""
	I0520 11:50:06.736606   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.736615   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:06.736620   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:06.736687   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:06.774584   58316 cri.go:89] found id: ""
	I0520 11:50:06.774608   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.774615   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:06.774625   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:06.774682   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:06.810565   58316 cri.go:89] found id: ""
	I0520 11:50:06.810591   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.810599   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:06.810604   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:06.810666   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:06.845063   58316 cri.go:89] found id: ""
	I0520 11:50:06.845101   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.845113   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:06.845121   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:06.845208   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:06.884560   58316 cri.go:89] found id: ""
	I0520 11:50:06.884585   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.884592   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:06.884597   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:06.884650   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:06.923422   58316 cri.go:89] found id: ""
	I0520 11:50:06.923450   58316 logs.go:276] 0 containers: []
	W0520 11:50:06.923461   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:06.923474   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:06.923490   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:06.937831   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:06.937856   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:07.008877   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.008897   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:07.008907   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:07.091174   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:07.091208   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:07.128889   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:07.128922   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:09.686881   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:09.700771   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:09.700842   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:09.737129   58316 cri.go:89] found id: ""
	I0520 11:50:09.737156   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.737166   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:50:09.737172   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:09.737233   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:09.772130   58316 cri.go:89] found id: ""
	I0520 11:50:09.772164   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.772175   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:50:09.772182   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:09.772251   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:09.811033   58316 cri.go:89] found id: ""
	I0520 11:50:09.811059   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.811066   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:50:09.811072   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:09.811127   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:09.849809   58316 cri.go:89] found id: ""
	I0520 11:50:09.849833   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.849841   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:50:09.849849   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:09.849911   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:09.888216   58316 cri.go:89] found id: ""
	I0520 11:50:09.888241   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.888251   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:50:09.888258   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:09.888319   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:09.925704   58316 cri.go:89] found id: ""
	I0520 11:50:09.925733   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.925742   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:50:09.925749   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:09.925807   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:09.961823   58316 cri.go:89] found id: ""
	I0520 11:50:09.961846   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.961853   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:09.961858   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:50:09.961913   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:50:09.998256   58316 cri.go:89] found id: ""
	I0520 11:50:09.998280   58316 logs.go:276] 0 containers: []
	W0520 11:50:09.998288   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:50:09.998304   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:09.998319   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:10.077998   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:50:10.078041   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:10.116358   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:10.116383   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:10.169638   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:10.169670   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:10.184056   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:10.184082   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:50:10.256798   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:50:07.555902   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:09.558035   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:08.366901   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:10.862545   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.862664   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:12.757663   58316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:12.772331   58316 kubeadm.go:591] duration metric: took 4m3.848225092s to restartPrimaryControlPlane
	W0520 11:50:12.772412   58316 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:50:12.772450   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:50:13.531871   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:50:13.547475   58316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:50:13.558690   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:50:13.568708   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:50:13.568730   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:50:13.568772   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:50:13.579016   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:50:13.579078   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:50:13.590821   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:50:13.600567   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:50:13.600614   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:50:13.610747   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.619942   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:50:13.619981   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:50:13.629831   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:50:13.638940   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:50:13.638991   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:50:13.648962   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:50:13.714817   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:50:13.714873   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:50:13.872737   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:50:13.872919   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:50:13.873055   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:50:14.078057   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:50:14.080202   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:50:14.080339   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:50:14.080431   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:50:14.080563   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:50:14.080660   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:50:14.080763   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:50:14.080842   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:50:14.081017   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:50:14.081620   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:50:14.081988   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:50:14.082494   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:50:14.082552   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:50:14.082602   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:50:14.157464   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:50:14.556498   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:50:14.754304   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:50:15.061126   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:50:15.079978   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:50:15.081069   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:50:15.081248   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:50:15.222910   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:50:12.385235   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.886286   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:15.225078   58316 out.go:204]   - Booting up control plane ...
	I0520 11:50:15.225186   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:50:15.232621   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:50:15.233678   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:50:15.234394   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:50:15.236632   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:50:12.055821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.556030   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:14.865340   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.362603   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.388804   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.885030   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:17.056232   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.555242   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.555339   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:19.363446   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.861989   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:21.886042   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:24.384353   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.555974   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.054899   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:23.862506   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:25.862952   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:26.384558   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.384818   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.387241   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.055327   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.055628   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:28.365187   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:30.862199   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.863892   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.885393   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.384549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:32.555087   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.055509   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:35.362103   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.362430   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.386206   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.386258   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:37.055828   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.554574   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.554821   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:39.862750   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.862973   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:41.885277   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.886162   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:43.555316   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:45.556417   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:44.364091   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.364286   58398 pod_ready.go:102] pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:46.385011   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.385754   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.387956   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.055237   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:50.555567   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:48.362404   58398 pod_ready.go:81] duration metric: took 4m0.006145654s for pod "metrics-server-569cc877fc-ll4n8" in "kube-system" namespace to be "Ready" ...
	E0520 11:50:48.362424   58398 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:50:48.362431   58398 pod_ready.go:38] duration metric: took 4m3.605442277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:50:48.362444   58398 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:50:48.362469   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:48.362528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:48.422343   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.422361   58398 cri.go:89] found id: ""
	I0520 11:50:48.422369   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:48.422428   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.427448   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:48.427512   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:48.477327   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.477353   58398 cri.go:89] found id: ""
	I0520 11:50:48.477361   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:48.477418   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.482113   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:48.482161   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:48.525019   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:48.525039   58398 cri.go:89] found id: ""
	I0520 11:50:48.525046   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:48.525087   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.529544   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:48.529616   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:48.571031   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:48.571055   58398 cri.go:89] found id: ""
	I0520 11:50:48.571065   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:48.571117   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.575601   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:48.575653   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:48.617086   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:48.617111   58398 cri.go:89] found id: ""
	I0520 11:50:48.617120   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:48.617174   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.623512   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:48.623585   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:48.660141   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.660170   58398 cri.go:89] found id: ""
	I0520 11:50:48.660179   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:48.660236   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.665578   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:48.665648   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:48.704366   58398 cri.go:89] found id: ""
	I0520 11:50:48.704398   58398 logs.go:276] 0 containers: []
	W0520 11:50:48.704408   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:48.704418   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:48.704482   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:48.745897   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:48.745925   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:48.745931   58398 cri.go:89] found id: ""
	I0520 11:50:48.745940   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:48.745997   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.750982   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:48.755064   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:48.755084   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:48.804692   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:48.804720   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:48.862735   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:48.862765   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:48.914854   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:48.914885   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:48.930449   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:48.930473   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:48.980371   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:48.980398   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:49.019847   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:49.019871   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:49.066103   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:49.066131   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:49.102912   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:49.102938   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:49.148380   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:49.148403   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:49.184436   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:49.184466   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:49.658226   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:49.658267   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:49.715674   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:49.715724   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:52.360868   58398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:50:52.378573   58398 api_server.go:72] duration metric: took 4m14.818629715s to wait for apiserver process to appear ...
	I0520 11:50:52.378595   58398 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:50:52.378634   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:52.378694   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:52.423033   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.423057   58398 cri.go:89] found id: ""
	I0520 11:50:52.423067   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:52.423124   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.427361   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:52.427422   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:52.472357   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.472383   58398 cri.go:89] found id: ""
	I0520 11:50:52.472393   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:52.472452   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.477472   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:52.477536   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:52.517721   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.517742   58398 cri.go:89] found id: ""
	I0520 11:50:52.517750   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:52.517806   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.522347   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:52.522415   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:52.562720   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:52.562747   58398 cri.go:89] found id: ""
	I0520 11:50:52.562757   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:52.562815   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.567322   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:52.567378   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:52.605752   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:52.605779   58398 cri.go:89] found id: ""
	I0520 11:50:52.605787   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:52.605841   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.609932   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:52.609996   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:52.646704   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:52.646734   58398 cri.go:89] found id: ""
	I0520 11:50:52.646745   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:52.646805   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.651422   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:52.651493   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:52.690880   58398 cri.go:89] found id: ""
	I0520 11:50:52.690909   58398 logs.go:276] 0 containers: []
	W0520 11:50:52.690919   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:52.690927   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:52.690997   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:52.733525   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.733558   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:52.733562   58398 cri.go:89] found id: ""
	I0520 11:50:52.733568   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:52.733615   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.738004   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:52.743086   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:52.743108   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:52.795042   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:52.795078   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:52.809356   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:52.809384   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:52.854486   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:52.854517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:52.904266   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:52.904289   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:52.941257   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:52.941285   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:52.886142   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.385782   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.237878   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:50:55.238408   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:50:55.238664   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:53.057541   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:55.554909   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:52.978583   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:52.978613   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:53.081246   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:53.081276   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:53.124472   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:53.124509   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:53.167027   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:53.167055   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:53.223091   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:53.223125   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:53.261390   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:53.261417   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:53.680820   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:53.680877   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:56.227910   58398 api_server.go:253] Checking apiserver healthz at https://192.168.50.167:8443/healthz ...
	I0520 11:50:56.233856   58398 api_server.go:279] https://192.168.50.167:8443/healthz returned 200:
	ok
	I0520 11:50:56.234827   58398 api_server.go:141] control plane version: v1.30.1
	I0520 11:50:56.234852   58398 api_server.go:131] duration metric: took 3.856250847s to wait for apiserver health ...
	I0520 11:50:56.234859   58398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:50:56.234880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:50:56.234923   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:50:56.273520   58398 cri.go:89] found id: "4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.273539   58398 cri.go:89] found id: ""
	I0520 11:50:56.273545   58398 logs.go:276] 1 containers: [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677]
	I0520 11:50:56.273589   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.278121   58398 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:50:56.278181   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:50:56.311934   58398 cri.go:89] found id: "e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.311958   58398 cri.go:89] found id: ""
	I0520 11:50:56.311966   58398 logs.go:276] 1 containers: [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577]
	I0520 11:50:56.312011   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.315972   58398 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:50:56.316033   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:50:56.351237   58398 cri.go:89] found id: "618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.351259   58398 cri.go:89] found id: ""
	I0520 11:50:56.351268   58398 logs.go:276] 1 containers: [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f]
	I0520 11:50:56.351323   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.355480   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:50:56.355528   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:50:56.389768   58398 cri.go:89] found id: "c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.389782   58398 cri.go:89] found id: ""
	I0520 11:50:56.389789   58398 logs.go:276] 1 containers: [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d]
	I0520 11:50:56.389829   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.393815   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:50:56.393869   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:50:56.436557   58398 cri.go:89] found id: "11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:56.436582   58398 cri.go:89] found id: ""
	I0520 11:50:56.436592   58398 logs.go:276] 1 containers: [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de]
	I0520 11:50:56.436637   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.440866   58398 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:50:56.440943   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:50:56.480688   58398 cri.go:89] found id: "f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.480734   58398 cri.go:89] found id: ""
	I0520 11:50:56.480744   58398 logs.go:276] 1 containers: [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39]
	I0520 11:50:56.480814   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.485880   58398 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:50:56.485974   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:50:56.521965   58398 cri.go:89] found id: ""
	I0520 11:50:56.521992   58398 logs.go:276] 0 containers: []
	W0520 11:50:56.521999   58398 logs.go:278] No container was found matching "kindnet"
	I0520 11:50:56.522005   58398 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:50:56.522063   58398 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:50:56.567585   58398 cri.go:89] found id: "aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.567611   58398 cri.go:89] found id: "d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:56.567617   58398 cri.go:89] found id: ""
	I0520 11:50:56.567627   58398 logs.go:276] 2 containers: [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e]
	I0520 11:50:56.567677   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.572972   58398 ssh_runner.go:195] Run: which crictl
	I0520 11:50:56.577459   58398 logs.go:123] Gathering logs for kube-controller-manager [f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39] ...
	I0520 11:50:56.577484   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8d4797eeb87b6dce1b38e9be8b1e74d1bfd344d7a1783bfda0594fb165d9b39"
	I0520 11:50:56.636435   58398 logs.go:123] Gathering logs for storage-provisioner [aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d] ...
	I0520 11:50:56.636467   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa00e0ad2982a53f33015461b93aacb54b81ff67e186c1141ee3d91e7da1f92d"
	I0520 11:50:56.675690   58398 logs.go:123] Gathering logs for etcd [e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577] ...
	I0520 11:50:56.675714   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ebf27b170f98a222ea860c20862d1fdc1e6ea42c6a6ff5feed1e3e9a30d577"
	I0520 11:50:56.720035   58398 logs.go:123] Gathering logs for dmesg ...
	I0520 11:50:56.720065   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:50:56.734275   58398 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:50:56.734302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:50:56.837237   58398 logs.go:123] Gathering logs for kube-apiserver [4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677] ...
	I0520 11:50:56.837268   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bac0296aa85f09489e9916b046ad2caee132e3e4f8e70a59423b61cad04e677"
	I0520 11:50:56.885482   58398 logs.go:123] Gathering logs for coredns [618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f] ...
	I0520 11:50:56.885517   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 618ca565a445701a38438bd0a2e62c73ccdd964e590346b21a2549a1905d374f"
	I0520 11:50:56.922276   58398 logs.go:123] Gathering logs for kube-scheduler [c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d] ...
	I0520 11:50:56.922302   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c19fee6b7c2faad708f1c3c06af6a7d92320899c10714ad322c9c1311122d29d"
	I0520 11:50:56.971119   58398 logs.go:123] Gathering logs for kube-proxy [11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de] ...
	I0520 11:50:56.971148   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11f7139d84207be1a6cfaca995e4d68bf9b5bc39e8eedfe5e10b6ec0a6aca4de"
	I0520 11:50:57.010383   58398 logs.go:123] Gathering logs for storage-provisioner [d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e] ...
	I0520 11:50:57.010414   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0a1a121eec8a814f3fd4ae9e73d3f149b198ee8fbc66fe154928194aa35ab1e"
	I0520 11:50:57.044880   58398 logs.go:123] Gathering logs for kubelet ...
	I0520 11:50:57.044909   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:50:57.097906   58398 logs.go:123] Gathering logs for container status ...
	I0520 11:50:57.097936   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:50:57.147600   58398 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:50:57.147626   58398 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:50:59.998576   58398 system_pods.go:59] 8 kube-system pods found
	I0520 11:50:59.998601   58398 system_pods.go:61] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:50:59.998605   58398 system_pods.go:61] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:50:59.998609   58398 system_pods.go:61] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:50:59.998613   58398 system_pods.go:61] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:50:59.998617   58398 system_pods.go:61] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:50:59.998621   58398 system_pods.go:61] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:50:59.998627   58398 system_pods.go:61] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:50:59.998631   58398 system_pods.go:61] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:50:59.998639   58398 system_pods.go:74] duration metric: took 3.763773626s to wait for pod list to return data ...
	I0520 11:50:59.998645   58398 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:00.001599   58398 default_sa.go:45] found service account: "default"
	I0520 11:51:00.001619   58398 default_sa.go:55] duration metric: took 2.967907ms for default service account to be created ...
	I0520 11:51:00.001626   58398 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:00.007124   58398 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:00.007151   58398 system_pods.go:89] "coredns-7db6d8ff4d-64txr" [e755d2ac-2e0d-4088-bf96-9d8aadd69987] Running
	I0520 11:51:00.007156   58398 system_pods.go:89] "etcd-embed-certs-274976" [c63668dc-ecc7-4a7e-a871-c36197962f4f] Running
	I0520 11:51:00.007161   58398 system_pods.go:89] "kube-apiserver-embed-certs-274976" [917adc88-566c-4b19-bd8d-079fa6537d65] Running
	I0520 11:51:00.007168   58398 system_pods.go:89] "kube-controller-manager-embed-certs-274976" [5a32647f-b177-4876-b92e-f21deb93f357] Running
	I0520 11:51:00.007175   58398 system_pods.go:89] "kube-proxy-6n9d5" [f2e07292-bd7f-446a-aa93-3975a4860ac2] Running
	I0520 11:51:00.007182   58398 system_pods.go:89] "kube-scheduler-embed-certs-274976" [6dcba9bb-0ce4-4875-a39a-0548b4fa3c52] Running
	I0520 11:51:00.007194   58398 system_pods.go:89] "metrics-server-569cc877fc-ll4n8" [33cfa4c5-e053-4109-8a22-9abc96828f1b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:00.007205   58398 system_pods.go:89] "storage-provisioner" [86700a9a-3778-4da8-8b62-e87269b0ffd2] Running
	I0520 11:51:00.007215   58398 system_pods.go:126] duration metric: took 5.582307ms to wait for k8s-apps to be running ...
	I0520 11:51:00.007226   58398 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:00.007281   58398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:00.023780   58398 system_svc.go:56] duration metric: took 16.545966ms WaitForService to wait for kubelet
	I0520 11:51:00.023806   58398 kubeadm.go:576] duration metric: took 4m22.463864802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:00.023830   58398 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:00.026757   58398 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:00.026775   58398 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:00.026788   58398 node_conditions.go:105] duration metric: took 2.951465ms to run NodePressure ...
	I0520 11:51:00.026798   58398 start.go:240] waiting for startup goroutines ...
	I0520 11:51:00.026804   58398 start.go:245] waiting for cluster config update ...
	I0520 11:51:00.026814   58398 start.go:254] writing updated cluster config ...
	I0520 11:51:00.027094   58398 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:00.078104   58398 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:00.079923   58398 out.go:177] * Done! kubectl is now configured to use "embed-certs-274976" cluster and "default" namespace by default
	I0520 11:50:57.885337   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.385874   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:00.239814   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:00.240101   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:50:57.554979   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:50:59.555797   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.886892   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:05.384747   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:02.060543   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:04.555457   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:06.556619   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:07.386407   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:09.885284   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:10.240955   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:10.241197   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:09.055313   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.056015   58840 pod_ready.go:102] pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:11.886887   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:13.887204   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:15.888591   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:12.055930   58840 pod_ready.go:81] duration metric: took 4m0.00701193s for pod "metrics-server-569cc877fc-cdjmz" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:12.055967   58840 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:51:12.055978   58840 pod_ready.go:38] duration metric: took 4m5.378750595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:12.055998   58840 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:51:12.056034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:12.056101   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:12.115475   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.115503   58840 cri.go:89] found id: ""
	I0520 11:51:12.115513   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:12.115568   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.120850   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:12.120940   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:12.161022   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.161055   58840 cri.go:89] found id: ""
	I0520 11:51:12.161062   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:12.161130   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.165525   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:12.165586   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:12.211862   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.211890   58840 cri.go:89] found id: ""
	I0520 11:51:12.211899   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:12.211959   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.217037   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:12.217114   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:12.256985   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.257010   58840 cri.go:89] found id: ""
	I0520 11:51:12.257018   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:12.257064   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.261765   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:12.261825   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:12.299745   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.299773   58840 cri.go:89] found id: ""
	I0520 11:51:12.299781   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:12.299842   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.307681   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:12.307752   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:12.346729   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.346754   58840 cri.go:89] found id: ""
	I0520 11:51:12.346763   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:12.346819   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.352034   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:12.352096   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:12.394387   58840 cri.go:89] found id: ""
	I0520 11:51:12.394405   58840 logs.go:276] 0 containers: []
	W0520 11:51:12.394412   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:12.394416   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:12.394460   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:12.434198   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:12.434224   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.434229   58840 cri.go:89] found id: ""
	I0520 11:51:12.434238   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:12.434298   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.439424   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:12.444096   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:12.444113   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:12.501635   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:12.501668   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:12.516529   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:12.516557   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:12.662690   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:12.662723   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:12.724415   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:12.724455   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:12.767417   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:12.767446   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:12.815739   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:12.815775   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:12.860515   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:12.860543   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:12.900348   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:12.900377   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:12.948489   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:12.948518   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:12.996999   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:12.997030   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:13.034866   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:13.034896   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:13.532828   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:13.532875   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.083150   58840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:51:16.106110   58840 api_server.go:72] duration metric: took 4m16.64390235s to wait for apiserver process to appear ...
	I0520 11:51:16.106155   58840 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:51:16.106198   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:16.106262   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:16.148418   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:16.148450   58840 cri.go:89] found id: ""
	I0520 11:51:16.148459   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:16.148527   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.153400   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:16.153456   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:16.196145   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:16.196167   58840 cri.go:89] found id: ""
	I0520 11:51:16.196175   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:16.196231   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.200792   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:16.200857   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:16.244411   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:16.244435   58840 cri.go:89] found id: ""
	I0520 11:51:16.244446   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:16.244497   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.249637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:16.249707   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:16.288520   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.288542   58840 cri.go:89] found id: ""
	I0520 11:51:16.288550   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:16.288613   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.293307   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:16.293377   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:16.336963   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.336988   58840 cri.go:89] found id: ""
	I0520 11:51:16.336998   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:16.337053   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.342155   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:16.342236   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:16.390589   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.390610   58840 cri.go:89] found id: ""
	I0520 11:51:16.390617   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:16.390672   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.395319   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:16.395367   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:16.444947   58840 cri.go:89] found id: ""
	I0520 11:51:16.444974   58840 logs.go:276] 0 containers: []
	W0520 11:51:16.444985   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:16.445005   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:16.445069   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:16.485389   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.485414   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.485418   58840 cri.go:89] found id: ""
	I0520 11:51:16.485425   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:16.485481   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.490036   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:16.494650   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:16.494678   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:16.510820   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:16.510847   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:16.549583   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:16.549616   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:16.593009   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:16.593036   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:16.644523   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:16.644551   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:16.686326   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:16.686355   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:16.753308   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:16.753339   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:16.799568   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:16.799600   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:16.858422   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:16.858454   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:18.385869   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:20.386467   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:16.983556   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:16.983591   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:17.034851   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:17.034885   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:17.078220   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:17.078247   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:17.120946   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:17.120979   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:20.055408   58840 api_server.go:253] Checking apiserver healthz at https://192.168.61.221:8444/healthz ...
	I0520 11:51:20.059619   58840 api_server.go:279] https://192.168.61.221:8444/healthz returned 200:
	ok
	I0520 11:51:20.060649   58840 api_server.go:141] control plane version: v1.30.1
	I0520 11:51:20.060671   58840 api_server.go:131] duration metric: took 3.954509172s to wait for apiserver health ...
	I0520 11:51:20.060679   58840 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:51:20.060698   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:51:20.060745   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:51:20.099666   58840 cri.go:89] found id: "c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.099687   58840 cri.go:89] found id: ""
	I0520 11:51:20.099695   58840 logs.go:276] 1 containers: [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0]
	I0520 11:51:20.099746   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.104235   58840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:51:20.104286   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:51:20.162860   58840 cri.go:89] found id: "82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.162881   58840 cri.go:89] found id: ""
	I0520 11:51:20.162887   58840 logs.go:276] 1 containers: [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b]
	I0520 11:51:20.162936   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.168215   58840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:51:20.168267   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:51:20.208871   58840 cri.go:89] found id: "3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.208900   58840 cri.go:89] found id: ""
	I0520 11:51:20.208909   58840 logs.go:276] 1 containers: [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5]
	I0520 11:51:20.208982   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.213637   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:51:20.213717   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:51:20.253194   58840 cri.go:89] found id: "7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.253219   58840 cri.go:89] found id: ""
	I0520 11:51:20.253230   58840 logs.go:276] 1 containers: [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d]
	I0520 11:51:20.253286   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.258190   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:51:20.258261   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:51:20.299693   58840 cri.go:89] found id: "03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.299713   58840 cri.go:89] found id: ""
	I0520 11:51:20.299720   58840 logs.go:276] 1 containers: [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b]
	I0520 11:51:20.299779   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.308104   58840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:51:20.308165   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:51:20.348954   58840 cri.go:89] found id: "78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:20.348977   58840 cri.go:89] found id: ""
	I0520 11:51:20.348985   58840 logs.go:276] 1 containers: [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2]
	I0520 11:51:20.349030   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.353738   58840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:51:20.353798   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:51:20.388466   58840 cri.go:89] found id: ""
	I0520 11:51:20.388486   58840 logs.go:276] 0 containers: []
	W0520 11:51:20.388496   58840 logs.go:278] No container was found matching "kindnet"
	I0520 11:51:20.388504   58840 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0520 11:51:20.388564   58840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0520 11:51:20.430628   58840 cri.go:89] found id: "5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.430653   58840 cri.go:89] found id: "254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.430659   58840 cri.go:89] found id: ""
	I0520 11:51:20.430668   58840 logs.go:276] 2 containers: [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7]
	I0520 11:51:20.430714   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.435241   58840 ssh_runner.go:195] Run: which crictl
	I0520 11:51:20.439312   58840 logs.go:123] Gathering logs for storage-provisioner [5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9] ...
	I0520 11:51:20.439332   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5555dedde072c4e728a8abc37e3146fa3fda4d20054a02cd8b529a437525c1c9"
	I0520 11:51:20.476361   58840 logs.go:123] Gathering logs for storage-provisioner [254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7] ...
	I0520 11:51:20.476388   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 254a32d2ee69bb67eae60924dfdeab23efd27d074875e633582a4571ec8c36f7"
	I0520 11:51:20.514317   58840 logs.go:123] Gathering logs for kubelet ...
	I0520 11:51:20.514345   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:51:20.566557   58840 logs.go:123] Gathering logs for dmesg ...
	I0520 11:51:20.566604   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:51:20.582249   58840 logs.go:123] Gathering logs for kube-apiserver [c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0] ...
	I0520 11:51:20.582279   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c254befba5c81d1982155fad90d4cb79f90434c401fa4f5de88cf29f8d6458f0"
	I0520 11:51:20.639530   58840 logs.go:123] Gathering logs for coredns [3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5] ...
	I0520 11:51:20.639568   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc749b082254a64a8d3e9f2a19c48d974713b269836687cec2db9dbb1fdbef5"
	I0520 11:51:20.676804   58840 logs.go:123] Gathering logs for kube-scheduler [7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d] ...
	I0520 11:51:20.676837   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d443d30c89f08291aff07f6722a2844cd014d9d4774cb989c3c10e69a1e265d"
	I0520 11:51:20.713661   58840 logs.go:123] Gathering logs for kube-proxy [03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b] ...
	I0520 11:51:20.713691   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a8b6ec44d3aed1c2341d421479ec8c6e208b9653ecd45a7c4439f78e86bc0b"
	I0520 11:51:20.757100   58840 logs.go:123] Gathering logs for container status ...
	I0520 11:51:20.757125   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:51:20.805824   58840 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:51:20.805859   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:51:20.921508   58840 logs.go:123] Gathering logs for etcd [82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b] ...
	I0520 11:51:20.921540   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82682648d9acf597bae5d385a0efafc45041d11cc7c94ba9a52b9452e80a9e7b"
	I0520 11:51:20.968752   58840 logs.go:123] Gathering logs for kube-controller-manager [78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2] ...
	I0520 11:51:20.968785   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78120ead890c007ea4d09405618bc46bf86d40bc962cb8f21570e33b343e45e2"
	I0520 11:51:21.024773   58840 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:51:21.024802   58840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:51:23.915504   58840 system_pods.go:59] 8 kube-system pods found
	I0520 11:51:23.915536   58840 system_pods.go:61] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.915540   58840 system_pods.go:61] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.915544   58840 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.915549   58840 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.915552   58840 system_pods.go:61] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.915555   58840 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.915562   58840 system_pods.go:61] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.915566   58840 system_pods.go:61] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.915574   58840 system_pods.go:74] duration metric: took 3.854889521s to wait for pod list to return data ...
	I0520 11:51:23.915582   58840 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:51:23.917651   58840 default_sa.go:45] found service account: "default"
	I0520 11:51:23.917672   58840 default_sa.go:55] duration metric: took 2.084872ms for default service account to be created ...
	I0520 11:51:23.917679   58840 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:51:23.922566   58840 system_pods.go:86] 8 kube-system pods found
	I0520 11:51:23.922589   58840 system_pods.go:89] "coredns-7db6d8ff4d-x92p9" [241b883a-458a-4b16-99b3-03b91aec4342] Running
	I0520 11:51:23.922594   58840 system_pods.go:89] "etcd-default-k8s-diff-port-668445" [70b552b7-b71e-4e2a-a336-38b248653ed3] Running
	I0520 11:51:23.922599   58840 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-668445" [6c6f9075-0d9c-4442-98a7-44fb0cfc5756] Running
	I0520 11:51:23.922604   58840 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-668445" [1e69e6a3-ea49-4b0b-8bbf-4cb867993e01] Running
	I0520 11:51:23.922608   58840 system_pods.go:89] "kube-proxy-7ddtk" [6b50634b-ce22-460b-a740-ed81a498ce63] Running
	I0520 11:51:23.922612   58840 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-668445" [3454105d-2e2b-44fc-a8bc-ad1ad03a74da] Running
	I0520 11:51:23.922618   58840 system_pods.go:89] "metrics-server-569cc877fc-cdjmz" [532bffab-0dd1-4097-b283-cb48a3010e23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:51:23.922623   58840 system_pods.go:89] "storage-provisioner" [07fd4b29-3c5f-4437-a696-1a3ec55d5285] Running
	I0520 11:51:23.922630   58840 system_pods.go:126] duration metric: took 4.946206ms to wait for k8s-apps to be running ...
	I0520 11:51:23.922638   58840 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:51:23.922679   58840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:51:23.941348   58840 system_svc.go:56] duration metric: took 18.700356ms WaitForService to wait for kubelet
	I0520 11:51:23.941388   58840 kubeadm.go:576] duration metric: took 4m24.479184818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:51:23.941414   58840 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:51:23.945372   58840 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:51:23.945400   58840 node_conditions.go:123] node cpu capacity is 2
	I0520 11:51:23.945414   58840 node_conditions.go:105] duration metric: took 3.993897ms to run NodePressure ...
	I0520 11:51:23.945428   58840 start.go:240] waiting for startup goroutines ...
	I0520 11:51:23.945437   58840 start.go:245] waiting for cluster config update ...
	I0520 11:51:23.945450   58840 start.go:254] writing updated cluster config ...
	I0520 11:51:23.945739   58840 ssh_runner.go:195] Run: rm -f paused
	I0520 11:51:23.997255   58840 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:51:23.999485   58840 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-668445" cluster and "default" namespace by default
	I0520 11:51:22.386549   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:24.885200   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:26.887178   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:29.384900   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:30.242152   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:51:30.242431   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:51:31.386214   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:33.386309   57519 pod_ready.go:102] pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace has status "Ready":"False"
	I0520 11:51:35.379767   57519 pod_ready.go:81] duration metric: took 4m0.000946428s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" ...
	E0520 11:51:35.379806   57519 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-v2zh9" in "kube-system" namespace to be "Ready" (will not retry!)
	I0520 11:51:35.379826   57519 pod_ready.go:38] duration metric: took 4m4.525892516s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:51:35.379854   57519 kubeadm.go:591] duration metric: took 4m13.966188007s to restartPrimaryControlPlane
	W0520 11:51:35.379905   57519 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0520 11:51:35.379929   57519 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:07.074203   57519 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.694253213s)
	I0520 11:52:07.074270   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:07.089871   57519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 11:52:07.100708   57519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:07.111260   57519 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:07.111283   57519 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:07.111333   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:07.121864   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:07.121912   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:07.131426   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:07.140637   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:07.140693   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:07.150305   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.160398   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:07.160464   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:07.170170   57519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:07.179684   57519 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:07.179740   57519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:07.189036   57519 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:07.242553   57519 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 11:52:07.242606   57519 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:07.396088   57519 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:07.396232   57519 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:07.396391   57519 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:07.612882   57519 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:07.614941   57519 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:07.615033   57519 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:07.615124   57519 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:07.615249   57519 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:07.615353   57519 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:07.615465   57519 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:07.615543   57519 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:07.615641   57519 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:07.615726   57519 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:07.615815   57519 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:07.615915   57519 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:07.615970   57519 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:07.616047   57519 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:07.744190   57519 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:08.012140   57519 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 11:52:08.080751   57519 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:08.369179   57519 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:08.507095   57519 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:08.507736   57519 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:08.510271   57519 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:10.244270   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:10.244870   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:10.244909   58316 kubeadm.go:309] 
	I0520 11:52:10.245019   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:52:10.245121   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:52:10.245131   58316 kubeadm.go:309] 
	I0520 11:52:10.245213   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:52:10.245293   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:52:10.245550   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:52:10.245566   58316 kubeadm.go:309] 
	I0520 11:52:10.245819   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:52:10.245900   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:52:10.245977   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:52:10.245987   58316 kubeadm.go:309] 
	I0520 11:52:10.246241   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:52:10.246437   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:52:10.246450   58316 kubeadm.go:309] 
	I0520 11:52:10.246702   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:52:10.246915   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:52:10.247106   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:52:10.247366   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:52:10.247396   58316 kubeadm.go:309] 
	I0520 11:52:10.247656   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:10.247823   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:52:10.248110   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0520 11:52:10.248219   58316 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0520 11:52:10.248277   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0520 11:52:10.736634   58316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:10.753534   58316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 11:52:10.766627   58316 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 11:52:10.766653   58316 kubeadm.go:156] found existing configuration files:
	
	I0520 11:52:10.766706   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 11:52:10.778676   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 11:52:10.778728   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 11:52:10.791073   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 11:52:10.801084   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 11:52:10.801142   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 11:52:10.811127   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.820319   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 11:52:10.820389   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 11:52:10.830024   58316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 11:52:10.839789   58316 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 11:52:10.839850   58316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 11:52:10.849670   58316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0520 11:52:10.930808   58316 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0520 11:52:10.930892   58316 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 11:52:11.097490   58316 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 11:52:11.097648   58316 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 11:52:11.097813   58316 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 11:52:11.296124   58316 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 11:52:08.512322   57519 out.go:204]   - Booting up control plane ...
	I0520 11:52:08.512437   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:08.512530   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:08.512609   57519 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:08.533828   57519 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:08.534843   57519 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:08.534903   57519 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:08.670084   57519 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 11:52:08.670192   57519 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 11:52:09.172058   57519 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.003696ms
	I0520 11:52:09.172181   57519 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 11:52:11.298250   58316 out.go:204]   - Generating certificates and keys ...
	I0520 11:52:11.298369   58316 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 11:52:11.298456   58316 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 11:52:11.300821   58316 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0520 11:52:11.300961   58316 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0520 11:52:11.301094   58316 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0520 11:52:11.301178   58316 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0520 11:52:11.301283   58316 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0520 11:52:11.301374   58316 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0520 11:52:11.301475   58316 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0520 11:52:11.301589   58316 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0520 11:52:11.301645   58316 kubeadm.go:309] [certs] Using the existing "sa" key
	I0520 11:52:11.301723   58316 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 11:52:11.441179   58316 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 11:52:11.617104   58316 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 11:52:11.918441   58316 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 11:52:12.107514   58316 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 11:52:12.132712   58316 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 11:52:12.134501   58316 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 11:52:12.134577   58316 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 11:52:12.307559   58316 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 11:52:14.174127   57519 kubeadm.go:309] [api-check] The API server is healthy after 5.002000599s
	I0520 11:52:14.189265   57519 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 11:52:14.707457   57519 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 11:52:14.731273   57519 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 11:52:14.731480   57519 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-814599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 11:52:14.746881   57519 kubeadm.go:309] [bootstrap-token] Using token: 3eyb61.mp2wh3ubkzpzouqm
	I0520 11:52:14.748381   57519 out.go:204]   - Configuring RBAC rules ...
	I0520 11:52:14.748539   57519 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 11:52:14.764426   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 11:52:14.773450   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 11:52:14.778770   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 11:52:14.782819   57519 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 11:52:14.787255   57519 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 11:52:14.900560   57519 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 11:52:15.342284   57519 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 11:52:15.901553   57519 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 11:52:15.904310   57519 kubeadm.go:309] 
	I0520 11:52:15.904380   57519 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 11:52:15.904389   57519 kubeadm.go:309] 
	I0520 11:52:15.904507   57519 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 11:52:15.904545   57519 kubeadm.go:309] 
	I0520 11:52:15.904578   57519 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 11:52:15.904653   57519 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 11:52:15.904713   57519 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 11:52:15.904723   57519 kubeadm.go:309] 
	I0520 11:52:15.904813   57519 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 11:52:15.904824   57519 kubeadm.go:309] 
	I0520 11:52:15.904910   57519 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 11:52:15.904935   57519 kubeadm.go:309] 
	I0520 11:52:15.905012   57519 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 11:52:15.905113   57519 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 11:52:15.905218   57519 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 11:52:15.905239   57519 kubeadm.go:309] 
	I0520 11:52:15.905326   57519 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 11:52:15.905433   57519 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 11:52:15.905443   57519 kubeadm.go:309] 
	I0520 11:52:15.905534   57519 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.905675   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 \
	I0520 11:52:15.905725   57519 kubeadm.go:309] 	--control-plane 
	I0520 11:52:15.905738   57519 kubeadm.go:309] 
	I0520 11:52:15.905851   57519 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 11:52:15.905862   57519 kubeadm.go:309] 
	I0520 11:52:15.905989   57519 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3eyb61.mp2wh3ubkzpzouqm \
	I0520 11:52:15.906131   57519 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1d7bb13b5288288cf4da2541eed82538cfba56046991ef838513da538c0e7d18 
	I0520 11:52:15.907657   57519 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:52:15.907690   57519 cni.go:84] Creating CNI manager for ""
	I0520 11:52:15.907703   57519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 11:52:15.909664   57519 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 11:52:15.911132   57519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 11:52:15.927634   57519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
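The 496-byte conflist minikube copies above is not reproduced in this log. To see what actually landed in /etc/cni/net.d on the node, one option (a sketch, assuming the profile name from this run) is:

	# Print the bridge CNI config minikube just wrote (contents not captured in this log)
	minikube -p no-preload-814599 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist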
	I0520 11:52:15.946135   57519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 11:52:15.946282   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:15.946298   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-814599 minikube.k8s.io/updated_at=2024_05_20T11_52_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=no-preload-814599 minikube.k8s.io/primary=true
	I0520 11:52:16.161431   57519 ops.go:34] apiserver oom_adj: -16
	I0520 11:52:16.166400   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:12.309392   58316 out.go:204]   - Booting up control plane ...
	I0520 11:52:12.309529   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 11:52:12.321163   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 11:52:12.322464   58316 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 11:52:12.323504   58316 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 11:52:12.326643   58316 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0520 11:52:16.666687   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.167351   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:17.666800   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.167094   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:18.667154   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.167261   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:19.666780   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.167002   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:20.667078   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.167229   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:21.666511   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.167275   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:22.666512   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.166471   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:23.667393   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.167417   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:24.667387   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.167046   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:25.666728   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.167118   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:26.667450   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.166833   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:27.666776   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.167244   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.666741   57519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 11:52:28.758525   57519 kubeadm.go:1107] duration metric: took 12.81228323s to wait for elevateKubeSystemPrivileges
	W0520 11:52:28.758571   57519 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 11:52:28.758582   57519 kubeadm.go:393] duration metric: took 5m7.403869165s to StartCluster
	I0520 11:52:28.758599   57519 settings.go:142] acquiring lock: {Name:mk27ea5080e11ba62a94bcd95619446047d9c0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.758679   57519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:52:28.760488   57519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/kubeconfig: {Name:mkfc9a9af427a200ae04468a110b955cd58714b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:52:28.760723   57519 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0520 11:52:28.762410   57519 out.go:177] * Verifying Kubernetes components...
	I0520 11:52:28.760783   57519 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:52:28.760966   57519 config.go:182] Loaded profile config "no-preload-814599": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:52:28.763493   57519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:52:28.763499   57519 addons.go:69] Setting storage-provisioner=true in profile "no-preload-814599"
	I0520 11:52:28.763527   57519 addons.go:234] Setting addon storage-provisioner=true in "no-preload-814599"
	I0520 11:52:28.763528   57519 addons.go:69] Setting metrics-server=true in profile "no-preload-814599"
	W0520 11:52:28.763537   57519 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:52:28.763527   57519 addons.go:69] Setting default-storageclass=true in profile "no-preload-814599"
	I0520 11:52:28.763558   57519 addons.go:234] Setting addon metrics-server=true in "no-preload-814599"
	I0520 11:52:28.763562   57519 host.go:66] Checking if "no-preload-814599" exists ...
	W0520 11:52:28.763568   57519 addons.go:243] addon metrics-server should already be in state true
	I0520 11:52:28.763582   57519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-814599"
	I0520 11:52:28.763589   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.763934   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763952   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.763984   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764022   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.764047   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.764081   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.778949   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0520 11:52:28.778995   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0520 11:52:28.779481   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.779522   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.780023   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780053   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780159   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.780180   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.780498   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780532   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.780680   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.780720   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0520 11:52:28.781084   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.781094   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.781127   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.781605   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.781627   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.781951   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.782608   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.782651   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.784981   57519 addons.go:234] Setting addon default-storageclass=true in "no-preload-814599"
	W0520 11:52:28.785010   57519 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:52:28.785041   57519 host.go:66] Checking if "no-preload-814599" exists ...
	I0520 11:52:28.785399   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.785423   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.797463   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0520 11:52:28.797488   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0520 11:52:28.797958   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798050   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.798571   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798589   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.798634   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.798647   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.799040   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799043   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.799206   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.799250   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.802002   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.804188   57519 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:52:28.802940   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.805408   57519 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:52:28.804469   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0520 11:52:28.805363   57519 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:28.806586   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:52:28.806592   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:52:28.806601   57519 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:52:28.806607   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806613   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.806965   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.807364   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.807375   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.807765   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.808210   57519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:52:28.808226   57519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:52:28.810837   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.810866   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811278   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811300   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811477   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.811716   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.811753   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.811769   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.811856   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812027   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.812308   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.812494   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.812643   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.812795   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.824082   57519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0520 11:52:28.824580   57519 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:52:28.825117   57519 main.go:141] libmachine: Using API Version  1
	I0520 11:52:28.825144   57519 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:52:28.825492   57519 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:52:28.825709   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetState
	I0520 11:52:28.827343   57519 main.go:141] libmachine: (no-preload-814599) Calling .DriverName
	I0520 11:52:28.827545   57519 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:28.827561   57519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:52:28.827574   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHHostname
	I0520 11:52:28.830623   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831261   57519 main.go:141] libmachine: (no-preload-814599) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:c8:fb", ip: ""} in network mk-no-preload-814599: {Iface:virbr1 ExpiryTime:2024-05-20 12:46:55 +0000 UTC Type:0 Mac:52:54:00:16:c8:fb Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:no-preload-814599 Clientid:01:52:54:00:16:c8:fb}
	I0520 11:52:28.831287   57519 main.go:141] libmachine: (no-preload-814599) DBG | domain no-preload-814599 has defined IP address 192.168.39.185 and MAC address 52:54:00:16:c8:fb in network mk-no-preload-814599
	I0520 11:52:28.831485   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHPort
	I0520 11:52:28.831625   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHKeyPath
	I0520 11:52:28.831783   57519 main.go:141] libmachine: (no-preload-814599) Calling .GetSSHUsername
	I0520 11:52:28.831912   57519 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/no-preload-814599/id_rsa Username:docker}
	I0520 11:52:28.984120   57519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:52:29.006910   57519 node_ready.go:35] waiting up to 6m0s for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016171   57519 node_ready.go:49] node "no-preload-814599" has status "Ready":"True"
	I0520 11:52:29.016197   57519 node_ready.go:38] duration metric: took 9.258398ms for node "no-preload-814599" to be "Ready" ...
	I0520 11:52:29.016207   57519 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:29.024085   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:29.092559   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:52:29.094377   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:52:29.094395   57519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:52:29.112741   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:52:29.133783   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:52:29.133804   57519 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:52:29.202727   57519 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:29.202756   57519 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:52:29.347632   57519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:52:30.143576   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.050979242s)
	I0520 11:52:30.143633   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143645   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143652   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030877915s)
	I0520 11:52:30.143694   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.143706   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.143986   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.143960   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.144014   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144026   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144037   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144040   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144053   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.144061   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.144068   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144045   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.144326   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.144345   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.146044   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.146057   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.146068   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.182464   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.182492   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.182802   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.182817   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.557632   57519 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.209948791s)
	I0520 11:52:30.557698   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.557715   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.557972   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.557992   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.557996   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558011   57519 main.go:141] libmachine: Making call to close driver server
	I0520 11:52:30.558022   57519 main.go:141] libmachine: (no-preload-814599) Calling .Close
	I0520 11:52:30.558252   57519 main.go:141] libmachine: Successfully made call to close driver server
	I0520 11:52:30.558267   57519 main.go:141] libmachine: Making call to close connection to plugin binary
	I0520 11:52:30.558277   57519 addons.go:470] Verifying addon metrics-server=true in "no-preload-814599"
	I0520 11:52:30.558302   57519 main.go:141] libmachine: (no-preload-814599) DBG | Closing plugin on server side
	I0520 11:52:30.560471   57519 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0520 11:52:30.561889   57519 addons.go:505] duration metric: took 1.801104362s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
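With storage-provisioner, default-storageclass and metrics-server reported as enabled, the addon state can be cross-checked from the host. A sketch, assuming the no-preload-814599 profile and its kubectl context:

	# Addon status for the profile (addon names taken from the log above)
	minikube -p no-preload-814599 addons list

	# The metrics-server Deployment created by the manifests applied above
	kubectl --context no-preload-814599 -n kube-system get deploy metrics-server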
	I0520 11:52:31.030297   57519 pod_ready.go:102] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"False"
	I0520 11:52:31.538984   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.539002   57519 pod_ready.go:81] duration metric: took 2.514888376s for pod "coredns-7db6d8ff4d-8qdbk" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.539011   57519 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544069   57519 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.544089   57519 pod_ready.go:81] duration metric: took 5.072004ms for pod "coredns-7db6d8ff4d-kbz7d" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.544097   57519 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549299   57519 pod_ready.go:92] pod "etcd-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.549315   57519 pod_ready.go:81] duration metric: took 5.212738ms for pod "etcd-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.549323   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555210   57519 pod_ready.go:92] pod "kube-apiserver-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.555229   57519 pod_ready.go:81] duration metric: took 5.899536ms for pod "kube-apiserver-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.555263   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560438   57519 pod_ready.go:92] pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.560454   57519 pod_ready.go:81] duration metric: took 5.183454ms for pod "kube-controller-manager-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.560463   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928261   57519 pod_ready.go:92] pod "kube-proxy-wkfkc" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:31.928280   57519 pod_ready.go:81] duration metric: took 367.811223ms for pod "kube-proxy-wkfkc" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:31.928290   57519 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328564   57519 pod_ready.go:92] pod "kube-scheduler-no-preload-814599" in "kube-system" namespace has status "Ready":"True"
	I0520 11:52:32.328589   57519 pod_ready.go:81] duration metric: took 400.292676ms for pod "kube-scheduler-no-preload-814599" in "kube-system" namespace to be "Ready" ...
	I0520 11:52:32.328598   57519 pod_ready.go:38] duration metric: took 3.31238177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:52:32.328612   57519 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:52:32.328672   57519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:52:32.346546   57519 api_server.go:72] duration metric: took 3.585793393s to wait for apiserver process to appear ...
	I0520 11:52:32.346575   57519 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:52:32.346598   57519 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0520 11:52:32.351109   57519 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0520 11:52:32.352044   57519 api_server.go:141] control plane version: v1.30.1
	I0520 11:52:32.352070   57519 api_server.go:131] duration metric: took 5.485989ms to wait for apiserver health ...
	I0520 11:52:32.352080   57519 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 11:52:32.532017   57519 system_pods.go:59] 9 kube-system pods found
	I0520 11:52:32.532048   57519 system_pods.go:61] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.532053   57519 system_pods.go:61] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.532056   57519 system_pods.go:61] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.532060   57519 system_pods.go:61] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.532064   57519 system_pods.go:61] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.532067   57519 system_pods.go:61] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.532071   57519 system_pods.go:61] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.532077   57519 system_pods.go:61] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.532081   57519 system_pods.go:61] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.532094   57519 system_pods.go:74] duration metric: took 180.008393ms to wait for pod list to return data ...
	I0520 11:52:32.532103   57519 default_sa.go:34] waiting for default service account to be created ...
	I0520 11:52:32.729233   57519 default_sa.go:45] found service account: "default"
	I0520 11:52:32.729264   57519 default_sa.go:55] duration metric: took 197.154237ms for default service account to be created ...
	I0520 11:52:32.729278   57519 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 11:52:32.932686   57519 system_pods.go:86] 9 kube-system pods found
	I0520 11:52:32.932717   57519 system_pods.go:89] "coredns-7db6d8ff4d-8qdbk" [98bbe29a-5bd1-4568-ac29-5bf030c38f79] Running
	I0520 11:52:32.932764   57519 system_pods.go:89] "coredns-7db6d8ff4d-kbz7d" [d36efcc0-4f91-4c5d-8781-0bb5f22187ab] Running
	I0520 11:52:32.932777   57519 system_pods.go:89] "etcd-no-preload-814599" [d0685317-5464-4eb1-a81f-eca1a644f86a] Running
	I0520 11:52:32.932786   57519 system_pods.go:89] "kube-apiserver-no-preload-814599" [4a6f40fe-3c8f-4afc-a731-ccb80c5840ed] Running
	I0520 11:52:32.932796   57519 system_pods.go:89] "kube-controller-manager-no-preload-814599" [b6f3ac72-bd6f-421d-85f6-905a362be527] Running
	I0520 11:52:32.932805   57519 system_pods.go:89] "kube-proxy-wkfkc" [679b7143-7bff-47f3-ab69-cd87eec68013] Running
	I0520 11:52:32.932812   57519 system_pods.go:89] "kube-scheduler-no-preload-814599" [0e7d1cb0-2e44-4ae5-b7fd-e68f88f7fc04] Running
	I0520 11:52:32.932824   57519 system_pods.go:89] "metrics-server-569cc877fc-lggvs" [72eeb6ee-27ce-4485-b364-5c9b87283d29] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0520 11:52:32.932835   57519 system_pods.go:89] "storage-provisioner" [1d4400ee-3a8e-4ac6-bbab-b4a562c6e53b] Running
	I0520 11:52:32.932854   57519 system_pods.go:126] duration metric: took 203.569018ms to wait for k8s-apps to be running ...
	I0520 11:52:32.932867   57519 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 11:52:32.932945   57519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:52:32.948983   57519 system_svc.go:56] duration metric: took 16.10841ms WaitForService to wait for kubelet
	I0520 11:52:32.949014   57519 kubeadm.go:576] duration metric: took 4.188266242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:52:32.949035   57519 node_conditions.go:102] verifying NodePressure condition ...
	I0520 11:52:33.128478   57519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0520 11:52:33.128503   57519 node_conditions.go:123] node cpu capacity is 2
	I0520 11:52:33.128514   57519 node_conditions.go:105] duration metric: took 179.473758ms to run NodePressure ...
	I0520 11:52:33.128527   57519 start.go:240] waiting for startup goroutines ...
	I0520 11:52:33.128536   57519 start.go:245] waiting for cluster config update ...
	I0520 11:52:33.128548   57519 start.go:254] writing updated cluster config ...
	I0520 11:52:33.128860   57519 ssh_runner.go:195] Run: rm -f paused
	I0520 11:52:33.177360   57519 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 11:52:33.179409   57519 out.go:177] * Done! kubectl is now configured to use "no-preload-814599" cluster and "default" namespace by default
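At this point the no-preload-814599 cluster is up and its kubeconfig is written. A quick sanity check mirroring what the log just verified (system pods Ready, apiserver healthz returning 200); the curl call assumes the API server address reported above and skips TLS verification because the cluster CA is not in the host trust store:

	# System pods the log reports as Running (metrics-server still Pending)
	kubectl --context no-preload-814599 -n kube-system get pods

	# Same health probe minikube ran against the API server
	curl -k https://192.168.39.185:8443/healthz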
	I0520 11:52:52.328753   58316 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0520 11:52:52.329026   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:52.329268   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:52:57.329722   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:52:57.329960   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:07.330334   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:07.330605   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:53:27.331241   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:53:27.331509   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330498   58316 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0520 11:54:07.330756   58316 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0520 11:54:07.330768   58316 kubeadm.go:309] 
	I0520 11:54:07.330813   58316 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0520 11:54:07.330868   58316 kubeadm.go:309] 		timed out waiting for the condition
	I0520 11:54:07.330878   58316 kubeadm.go:309] 
	I0520 11:54:07.330924   58316 kubeadm.go:309] 	This error is likely caused by:
	I0520 11:54:07.330980   58316 kubeadm.go:309] 		- The kubelet is not running
	I0520 11:54:07.331142   58316 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0520 11:54:07.331178   58316 kubeadm.go:309] 
	I0520 11:54:07.331298   58316 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0520 11:54:07.331354   58316 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0520 11:54:07.331409   58316 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0520 11:54:07.331419   58316 kubeadm.go:309] 
	I0520 11:54:07.331564   58316 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0520 11:54:07.331675   58316 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0520 11:54:07.331685   58316 kubeadm.go:309] 
	I0520 11:54:07.331842   58316 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0520 11:54:07.331971   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0520 11:54:07.332096   58316 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0520 11:54:07.332190   58316 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0520 11:54:07.332204   58316 kubeadm.go:309] 
	I0520 11:54:07.332486   58316 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 11:54:07.332595   58316 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0520 11:54:07.332685   58316 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0520 11:54:07.332755   58316 kubeadm.go:393] duration metric: took 7m58.459127604s to StartCluster
	I0520 11:54:07.332796   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0520 11:54:07.332870   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0520 11:54:07.378602   58316 cri.go:89] found id: ""
	I0520 11:54:07.378631   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.378641   58316 logs.go:278] No container was found matching "kube-apiserver"
	I0520 11:54:07.378648   58316 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0520 11:54:07.378714   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0520 11:54:07.415695   58316 cri.go:89] found id: ""
	I0520 11:54:07.415724   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.415732   58316 logs.go:278] No container was found matching "etcd"
	I0520 11:54:07.415739   58316 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0520 11:54:07.415802   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0520 11:54:07.456681   58316 cri.go:89] found id: ""
	I0520 11:54:07.456706   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.456717   58316 logs.go:278] No container was found matching "coredns"
	I0520 11:54:07.456726   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0520 11:54:07.456781   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0520 11:54:07.495834   58316 cri.go:89] found id: ""
	I0520 11:54:07.495870   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.495881   58316 logs.go:278] No container was found matching "kube-scheduler"
	I0520 11:54:07.495889   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0520 11:54:07.495949   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0520 11:54:07.533674   58316 cri.go:89] found id: ""
	I0520 11:54:07.533698   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.533705   58316 logs.go:278] No container was found matching "kube-proxy"
	I0520 11:54:07.533710   58316 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0520 11:54:07.533759   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0520 11:54:07.571118   58316 cri.go:89] found id: ""
	I0520 11:54:07.571141   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.571148   58316 logs.go:278] No container was found matching "kube-controller-manager"
	I0520 11:54:07.571155   58316 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0520 11:54:07.571214   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0520 11:54:07.609066   58316 cri.go:89] found id: ""
	I0520 11:54:07.609091   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.609105   58316 logs.go:278] No container was found matching "kindnet"
	I0520 11:54:07.609114   58316 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0520 11:54:07.609176   58316 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0520 11:54:07.653029   58316 cri.go:89] found id: ""
	I0520 11:54:07.653057   58316 logs.go:276] 0 containers: []
	W0520 11:54:07.653068   58316 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0520 11:54:07.653080   58316 logs.go:123] Gathering logs for kubelet ...
	I0520 11:54:07.653095   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0520 11:54:07.707497   58316 logs.go:123] Gathering logs for dmesg ...
	I0520 11:54:07.707533   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:54:07.728740   58316 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:54:07.728772   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0520 11:54:07.809062   58316 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0520 11:54:07.809088   58316 logs.go:123] Gathering logs for CRI-O ...
	I0520 11:54:07.809104   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0520 11:54:07.911717   58316 logs.go:123] Gathering logs for container status ...
	I0520 11:54:07.911762   58316 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0520 11:54:07.952747   58316 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0520 11:54:07.952792   58316 out.go:239] * 
	W0520 11:54:07.952855   58316 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.952881   58316 out.go:239] * 
	W0520 11:54:07.954044   58316 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:54:07.957368   58316 out.go:177] 
	W0520 11:54:07.958433   58316 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0520 11:54:07.958491   58316 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0520 11:54:07.958518   58316 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0520 11:54:07.959662   58316 out.go:177] 
	
	
	==> CRI-O <==
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.623725731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206796623699672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e3182c2-1773-4bf3-9787-66635c5dfed5 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.624459853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0f6c8d4-0c72-4e84-bf15-b3f8fb4cba4c name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.624507479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0f6c8d4-0c72-4e84-bf15-b3f8fb4cba4c name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.624549151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d0f6c8d4-0c72-4e84-bf15-b3f8fb4cba4c name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.659591114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31e830bb-7ae3-4ce1-842f-8174ee895085 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.659670452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31e830bb-7ae3-4ce1-842f-8174ee895085 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.660893908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b2b5473-6f05-483a-8e11-9a9415f6a020 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.661337392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206796661315224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b2b5473-6f05-483a-8e11-9a9415f6a020 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.661835565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=680ee8b9-ef4a-4b9c-b009-5d6fba849ffa name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.661889077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=680ee8b9-ef4a-4b9c-b009-5d6fba849ffa name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.661923422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=680ee8b9-ef4a-4b9c-b009-5d6fba849ffa name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.694580811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f91de0c-569d-4954-a33d-8c6acb0420c8 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.694644138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f91de0c-569d-4954-a33d-8c6acb0420c8 name=/runtime.v1.RuntimeService/Version
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.695490767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81b37159-ab36-423d-9ee4-6557de8981f6 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.695858118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206796695841148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81b37159-ab36-423d-9ee4-6557de8981f6 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.696347101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6ca3ec2-187d-4fd3-8cd1-3e8cb3b32fbf name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.696390724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6ca3ec2-187d-4fd3-8cd1-3e8cb3b32fbf name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.696418493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6ca3ec2-187d-4fd3-8cd1-3e8cb3b32fbf name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.727673481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38138351-c206-475e-9f0b-3f14167c764b name=/runtime.v1.RuntimeService/Version
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.727760200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38138351-c206-475e-9f0b-3f14167c764b name=/runtime.v1.RuntimeService/Version
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.728668106Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af640a66-d969-4501-89b2-ad586570d0c7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.729082091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716206796729051280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af640a66-d969-4501-89b2-ad586570d0c7 name=/runtime.v1.ImageService/ImageFsInfo
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.729624839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab696280-bc5a-4dbb-bb6a-ee397fccecf9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.729675688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab696280-bc5a-4dbb-bb6a-ee397fccecf9 name=/runtime.v1.RuntimeService/ListContainers
	May 20 12:06:36 old-k8s-version-210420 crio[654]: time="2024-05-20 12:06:36.729707372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ab696280-bc5a-4dbb-bb6a-ee397fccecf9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May20 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041197] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.512371] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.398190] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611528] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May20 11:46] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.055467] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069914] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.180624] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.154603] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.255744] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.419712] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.817677] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[ +12.205049] kauditd_printk_skb: 46 callbacks suppressed
	[May20 11:50] systemd-fstab-generator[5032]: Ignoring "noauto" option for root device
	[May20 11:52] systemd-fstab-generator[5321]: Ignoring "noauto" option for root device
	[  +0.070587] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:06:36 up 20 min,  0 users,  load average: 0.07, 0.02, 0.03
	Linux old-k8s-version-210420 5.10.207 #1 SMP Mon May 13 15:20:15 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc00082d7a0)
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: goroutine 166 [select]:
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a37ef0, 0x4f0ac20, 0xc000119a90, 0x1, 0xc0001000c0)
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009ea2a0, 0xc0001000c0)
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000823150, 0xc000857640)
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	May 20 12:06:35 old-k8s-version-210420 kubelet[6923]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	May 20 12:06:35 old-k8s-version-210420 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 20 12:06:35 old-k8s-version-210420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 20 12:06:36 old-k8s-version-210420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 150.
	May 20 12:06:36 old-k8s-version-210420 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 20 12:06:36 old-k8s-version-210420 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 20 12:06:36 old-k8s-version-210420 kubelet[6951]: I0520 12:06:36.221421    6951 server.go:416] Version: v1.20.0
	May 20 12:06:36 old-k8s-version-210420 kubelet[6951]: I0520 12:06:36.221774    6951 server.go:837] Client rotation is on, will bootstrap in background
	May 20 12:06:36 old-k8s-version-210420 kubelet[6951]: I0520 12:06:36.224737    6951 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 20 12:06:36 old-k8s-version-210420 kubelet[6951]: W0520 12:06:36.225760    6951 manager.go:159] Cannot detect current cgroup on cgroup v2
	May 20 12:06:36 old-k8s-version-210420 kubelet[6951]: I0520 12:06:36.226361    6951 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 2 (219.436222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-210420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (203.42s)

                                                
                                    

Test pass (245/313)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 51.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.1/json-events 13.17
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.12
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.54
22 TestOffline 82.69
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 197.44
29 TestAddons/parallel/Registry 17.43
31 TestAddons/parallel/InspektorGadget 10.78
33 TestAddons/parallel/HelmTiller 14.45
35 TestAddons/parallel/CSI 55.15
36 TestAddons/parallel/Headlamp 13.11
37 TestAddons/parallel/CloudSpanner 5.58
38 TestAddons/parallel/LocalPath 17.14
39 TestAddons/parallel/NvidiaDevicePlugin 6.69
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestCertOptions 87.9
46 TestCertExpiration 277.16
48 TestForceSystemdFlag 53.17
49 TestForceSystemdEnv 48.92
51 TestKVMDriverInstallOrUpdate 5
56 TestErrorSpam/start 0.32
57 TestErrorSpam/status 0.7
58 TestErrorSpam/pause 1.5
59 TestErrorSpam/unpause 1.52
60 TestErrorSpam/stop 5.41
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 56.11
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 55.05
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.02
72 TestFunctional/serial/CacheCmd/cache/add_local 2.2
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
77 TestFunctional/serial/CacheCmd/cache/delete 0.08
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 33.08
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.54
83 TestFunctional/serial/LogsFileCmd 1.56
84 TestFunctional/serial/InvalidService 4.13
86 TestFunctional/parallel/ConfigCmd 0.29
87 TestFunctional/parallel/DashboardCmd 16.95
88 TestFunctional/parallel/DryRun 0.26
89 TestFunctional/parallel/InternationalLanguage 0.13
90 TestFunctional/parallel/StatusCmd 1.11
94 TestFunctional/parallel/ServiceCmdConnect 11.57
95 TestFunctional/parallel/AddonsCmd 0.11
96 TestFunctional/parallel/PersistentVolumeClaim 50.91
98 TestFunctional/parallel/SSHCmd 0.38
99 TestFunctional/parallel/CpCmd 1.25
100 TestFunctional/parallel/MySQL 28.57
101 TestFunctional/parallel/FileSync 0.32
102 TestFunctional/parallel/CertSync 1.28
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
110 TestFunctional/parallel/License 0.76
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 1
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.35
118 TestFunctional/parallel/ImageCommands/Setup 2.22
119 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.03
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.63
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.03
132 TestFunctional/parallel/ServiceCmd/List 0.32
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
135 TestFunctional/parallel/ServiceCmd/Format 0.41
136 TestFunctional/parallel/ServiceCmd/URL 0.43
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
138 TestFunctional/parallel/ProfileCmd/profile_list 0.33
139 TestFunctional/parallel/MountCmd/any-port 8.61
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.38
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.68
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.49
145 TestFunctional/parallel/MountCmd/specific-port 1.85
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.81
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 255.72
157 TestMultiControlPlane/serial/DeployApp 7.83
158 TestMultiControlPlane/serial/PingHostFromPods 1.16
159 TestMultiControlPlane/serial/AddWorkerNode 50
160 TestMultiControlPlane/serial/NodeLabels 0.06
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
162 TestMultiControlPlane/serial/CopyFile 12.34
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.35
171 TestMultiControlPlane/serial/RestartCluster 326.43
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
173 TestMultiControlPlane/serial/AddSecondaryNode 76.73
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
178 TestJSONOutput/start/Command 58.43
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.76
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.64
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.34
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.18
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 89.26
210 TestMountStart/serial/StartWithMountFirst 25.67
211 TestMountStart/serial/VerifyMountFirst 0.35
212 TestMountStart/serial/StartWithMountSecond 29.09
213 TestMountStart/serial/VerifyMountSecond 0.36
214 TestMountStart/serial/DeleteFirst 0.87
215 TestMountStart/serial/VerifyMountPostDelete 0.36
216 TestMountStart/serial/Stop 1.76
217 TestMountStart/serial/RestartStopped 20.76
218 TestMountStart/serial/VerifyMountPostStop 0.35
221 TestMultiNode/serial/FreshStart2Nodes 123.71
222 TestMultiNode/serial/DeployApp2Nodes 5.58
223 TestMultiNode/serial/PingHostFrom2Pods 0.76
224 TestMultiNode/serial/AddNode 41.08
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.2
227 TestMultiNode/serial/CopyFile 6.91
228 TestMultiNode/serial/StopNode 2.27
229 TestMultiNode/serial/StartAfterStop 28.64
231 TestMultiNode/serial/DeleteNode 2.06
233 TestMultiNode/serial/RestartMultiNode 178.07
234 TestMultiNode/serial/ValidateNameConflict 42.23
241 TestScheduledStopUnix 113.64
245 TestRunningBinaryUpgrade 188.84
249 TestStoppedBinaryUpgrade/Setup 2.64
250 TestStoppedBinaryUpgrade/Upgrade 173.76
259 TestPause/serial/Start 77.98
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
264 TestNoKubernetes/serial/StartWithK8s 43.72
272 TestNetworkPlugins/group/false 2.62
276 TestNoKubernetes/serial/StartWithStopK8s 13.89
277 TestNoKubernetes/serial/Start 24.61
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
279 TestNoKubernetes/serial/ProfileList 1.07
280 TestNoKubernetes/serial/Stop 1.37
281 TestNoKubernetes/serial/StartNoArgs 42.89
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
286 TestStartStop/group/no-preload/serial/FirstStart 76.94
287 TestStartStop/group/no-preload/serial/DeployApp 9.27
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
291 TestStartStop/group/embed-certs/serial/FirstStart 53.93
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.6
294 TestStartStop/group/embed-certs/serial/DeployApp 11.28
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
298 TestStartStop/group/no-preload/serial/SecondStart 697.14
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
305 TestStartStop/group/old-k8s-version/serial/Stop 3.29
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
308 TestStartStop/group/embed-certs/serial/SecondStart 502.41
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 442.38
320 TestStartStop/group/newest-cni/serial/FirstStart 55.9
321 TestNetworkPlugins/group/auto/Start 98.7
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.31
324 TestStartStop/group/newest-cni/serial/Stop 11.35
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
326 TestStartStop/group/newest-cni/serial/SecondStart 39.35
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
330 TestStartStop/group/newest-cni/serial/Pause 2.42
331 TestNetworkPlugins/group/kindnet/Start 66.43
332 TestNetworkPlugins/group/auto/KubeletFlags 0.21
333 TestNetworkPlugins/group/auto/NetCatPod 11.24
334 TestNetworkPlugins/group/auto/DNS 0.17
335 TestNetworkPlugins/group/auto/Localhost 0.14
336 TestNetworkPlugins/group/auto/HairPin 0.13
337 TestNetworkPlugins/group/calico/Start 90.98
338 TestNetworkPlugins/group/custom-flannel/Start 109.47
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.2
341 TestNetworkPlugins/group/enable-default-cni/Start 85.64
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
344 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
345 TestNetworkPlugins/group/kindnet/DNS 0.17
346 TestNetworkPlugins/group/kindnet/Localhost 0.14
347 TestNetworkPlugins/group/kindnet/HairPin 0.13
348 TestNetworkPlugins/group/flannel/Start 101.31
349 TestNetworkPlugins/group/calico/ControllerPod 6.01
350 TestNetworkPlugins/group/calico/KubeletFlags 0.22
351 TestNetworkPlugins/group/calico/NetCatPod 10.25
352 TestNetworkPlugins/group/calico/DNS 0.17
353 TestNetworkPlugins/group/calico/Localhost 0.14
354 TestNetworkPlugins/group/calico/HairPin 0.15
355 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
356 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.92
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
359 TestNetworkPlugins/group/custom-flannel/DNS 0.19
360 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
361 TestNetworkPlugins/group/bridge/Start 64.04
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
366 TestNetworkPlugins/group/flannel/ControllerPod 6.01
367 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
368 TestNetworkPlugins/group/flannel/NetCatPod 11.21
369 TestNetworkPlugins/group/flannel/DNS 0.17
370 TestNetworkPlugins/group/flannel/Localhost 0.13
371 TestNetworkPlugins/group/flannel/HairPin 0.12
372 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
373 TestNetworkPlugins/group/bridge/NetCatPod 10.23
374 TestNetworkPlugins/group/bridge/DNS 0.19
375 TestNetworkPlugins/group/bridge/Localhost 0.17
376 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (51.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-865403 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-865403 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.97766351s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (51.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-865403
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-865403: exit status 85 (54.495004ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-865403 | jenkins | v1.33.1 | 20 May 24 10:20 UTC |          |
	|         | -p download-only-865403        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:20:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:20:30.356240   11173 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:20:30.356463   11173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:30.356471   11173 out.go:304] Setting ErrFile to fd 2...
	I0520 10:20:30.356475   11173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:30.356626   11173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	W0520 10:20:30.356736   11173 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18925-3819/.minikube/config/config.json: open /home/jenkins/minikube-integration/18925-3819/.minikube/config/config.json: no such file or directory
	I0520 10:20:30.357312   11173 out.go:298] Setting JSON to true
	I0520 10:20:30.358161   11173 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":173,"bootTime":1716200257,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:20:30.358228   11173 start.go:139] virtualization: kvm guest
	I0520 10:20:30.360733   11173 out.go:97] [download-only-865403] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:20:30.362177   11173 out.go:169] MINIKUBE_LOCATION=18925
	W0520 10:20:30.360824   11173 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 10:20:30.360874   11173 notify.go:220] Checking for updates...
	I0520 10:20:30.364706   11173 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:20:30.366019   11173 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:20:30.367284   11173 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:20:30.368503   11173 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0520 10:20:30.370773   11173 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 10:20:30.370971   11173 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:20:30.469974   11173 out.go:97] Using the kvm2 driver based on user configuration
	I0520 10:20:30.470019   11173 start.go:297] selected driver: kvm2
	I0520 10:20:30.470035   11173 start.go:901] validating driver "kvm2" against <nil>
	I0520 10:20:30.470362   11173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:20:30.470495   11173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:20:30.485161   11173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:20:30.485218   11173 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:20:30.485715   11173 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0520 10:20:30.485871   11173 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 10:20:30.485897   11173 cni.go:84] Creating CNI manager for ""
	I0520 10:20:30.485904   11173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:20:30.485912   11173 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 10:20:30.485991   11173 start.go:340] cluster config:
	{Name:download-only-865403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-865403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:20:30.486185   11173 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:20:30.488015   11173 out.go:97] Downloading VM boot image ...
	I0520 10:20:30.488054   11173 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/iso/amd64/minikube-v1.33.1-1715594774-18869-amd64.iso
	I0520 10:20:40.516269   11173 out.go:97] Starting "download-only-865403" primary control-plane node in "download-only-865403" cluster
	I0520 10:20:40.516289   11173 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 10:20:40.624349   11173 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 10:20:40.624381   11173 cache.go:56] Caching tarball of preloaded images
	I0520 10:20:40.624614   11173 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 10:20:40.626466   11173 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 10:20:40.626480   11173 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0520 10:20:40.738661   11173 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0520 10:20:54.327087   11173 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0520 10:20:54.327173   11173 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0520 10:20:55.348654   11173 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0520 10:20:55.348987   11173 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/download-only-865403/config.json ...
	I0520 10:20:55.349015   11173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/download-only-865403/config.json: {Name:mk5b977501cd9ef1f9696c531a068ec3a492c3c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:55.349161   11173 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0520 10:20:55.349313   11173 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-865403 host does not exist
	  To start a cluster, run: "minikube start -p download-only-865403"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
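
Note: the exit status 85 above is the expected outcome, not a regression. With --download-only the profile only caches the ISO, preload and binaries and never creates a VM, so "minikube logs" has no host to read from. A minimal way to reproduce the same check by hand, reusing the binary and profile name from this run:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-865403 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
    out/minikube-linux-amd64 logs -p download-only-865403
    echo $?   # 85: the control-plane host was never created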

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-865403
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (13.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-824408 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-824408 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.169778127s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)
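
Note: preload-exists only verifies that the tarball fetched in the previous json-events step landed in the local cache. A quick manual equivalent, assuming the MINIKUBE_HOME used by this run:

    ls -lh /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4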

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-824408
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-824408: exit status 85 (56.094388ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-865403 | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | -p download-only-865403        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| delete  | -p download-only-865403        | download-only-865403 | jenkins | v1.33.1 | 20 May 24 10:21 UTC | 20 May 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-824408 | jenkins | v1.33.1 | 20 May 24 10:21 UTC |                     |
	|         | -p download-only-824408        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:21:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:21:22.628840   11940 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:21:22.628952   11940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:21:22.628964   11940 out.go:304] Setting ErrFile to fd 2...
	I0520 10:21:22.628969   11940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:21:22.629153   11940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:21:22.629685   11940 out.go:298] Setting JSON to true
	I0520 10:21:22.630567   11940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":226,"bootTime":1716200257,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:21:22.630619   11940 start.go:139] virtualization: kvm guest
	I0520 10:21:22.632870   11940 out.go:97] [download-only-824408] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:21:22.634607   11940 out.go:169] MINIKUBE_LOCATION=18925
	I0520 10:21:22.633021   11940 notify.go:220] Checking for updates...
	I0520 10:21:22.637177   11940 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:21:22.638516   11940 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:21:22.639767   11940 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:21:22.641174   11940 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0520 10:21:22.643823   11940 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 10:21:22.644148   11940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:21:22.675803   11940 out.go:97] Using the kvm2 driver based on user configuration
	I0520 10:21:22.675845   11940 start.go:297] selected driver: kvm2
	I0520 10:21:22.675851   11940 start.go:901] validating driver "kvm2" against <nil>
	I0520 10:21:22.676171   11940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:21:22.676236   11940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18925-3819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0520 10:21:22.690866   11940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0520 10:21:22.690902   11940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:21:22.691369   11940 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0520 10:21:22.691518   11940 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 10:21:22.691569   11940 cni.go:84] Creating CNI manager for ""
	I0520 10:21:22.691581   11940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0520 10:21:22.691590   11940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 10:21:22.691637   11940 start.go:340] cluster config:
	{Name:download-only-824408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-824408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:21:22.691726   11940 iso.go:125] acquiring lock: {Name:mk9a0add0607cefcce3bacafa064900f37df9c39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:21:22.693275   11940 out.go:97] Starting "download-only-824408" primary control-plane node in "download-only-824408" cluster
	I0520 10:21:22.693292   11940 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:21:22.804858   11940 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0520 10:21:22.804894   11940 cache.go:56] Caching tarball of preloaded images
	I0520 10:21:22.805074   11940 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0520 10:21:22.806785   11940 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 10:21:22.806798   11940 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0520 10:21:22.915366   11940 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/18925-3819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-824408 host does not exist
	  To start a cluster, run: "minikube start -p download-only-824408"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-824408
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-680660 --alsologtostderr --binary-mirror http://127.0.0.1:45873 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-680660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-680660
--- PASS: TestBinaryMirror (0.54s)
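
Note: TestBinaryMirror points the kubectl/kubelet/kubeadm downloads at a local HTTP mirror instead of the default release location; the mirror port (45873 here) is picked per run. The commands as executed above:

    out/minikube-linux-amd64 start --download-only -p binary-mirror-680660 --alsologtostderr --binary-mirror http://127.0.0.1:45873 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-680660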

                                                
                                    
x
+
TestOffline (82.69s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-932913 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-932913 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.866272822s)
helpers_test.go:175: Cleaning up "offline-crio-932913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-932913
--- PASS: TestOffline (82.69s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-807933
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-807933: exit status 85 (47.666046ms)

                                                
                                                
-- stdout --
	* Profile "addons-807933" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-807933"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-807933
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-807933: exit status 85 (47.973186ms)

                                                
                                                
-- stdout --
	* Profile "addons-807933" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-807933"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
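
Note: both PreSetup checks run addon commands against a profile that does not exist yet and expect minikube to refuse with exit status 85 rather than create anything, which is exactly what the non-zero exits above show. The manual equivalent:

    out/minikube-linux-amd64 addons enable dashboard -p addons-807933    # exit 85 while the profile is absent
    out/minikube-linux-amd64 addons disable dashboard -p addons-807933   # same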

                                                
                                    
x
+
TestAddons/Setup (197.44s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-807933 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-807933 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m17.43907306s)
--- PASS: TestAddons/Setup (197.44s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.100263ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-26dx9" [d90bb428-cf8d-481e-8467-e72e21658dcd] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005939075s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vw56p" [d92f24a6-e6e3-4551-991f-30a570338e6a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005263743s
addons_test.go:340: (dbg) Run:  kubectl --context addons-807933 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-807933 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-807933 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.429988866s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 ip
2024/05/20 10:25:10 [DEBUG] GET http://192.168.39.86:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.43s)
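
Note: the registry check amounts to probing the in-cluster registry service from a throwaway busybox pod and then hitting port 5000 on the node from the host. A condensed replay, reusing the addons-807933 context and the node IP reported above (the curl is a manual stand-in for the GET the test issues):

    kubectl --context addons-807933 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    out/minikube-linux-amd64 -p addons-807933 ip
    curl http://192.168.39.86:5000/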

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6mkh7" [c01067ce-5eba-482a-9d19-bab5c9a483b3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004345113s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-807933
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-807933: (5.774535163s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (14.45s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 7.2632ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-lmqq4" [5327dee8-aaa4-44c5-9cf0-a904ab8b1b4a] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007543385s
addons_test.go:473: (dbg) Run:  kubectl --context addons-807933 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-807933 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.847872851s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.45s)
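
Note: the helm-tiller check waits for the tiller-deploy pod and then runs the helm client from a short-lived pod in kube-system; the one-off command it uses is reproducible as-is:

    kubectl --context addons-807933 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version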

                                                
                                    
x
+
TestAddons/parallel/CSI (55.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 6.751305ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [901b566b-b5ed-41c3-8aac-b71e19762cd3] Pending
helpers_test.go:344: "task-pv-pod" [901b566b-b5ed-41c3-8aac-b71e19762cd3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [901b566b-b5ed-41c3-8aac-b71e19762cd3] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004737527s
addons_test.go:584: (dbg) Run:  kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-807933 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-807933 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-807933 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-807933 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [192c415e-a498-439b-8b42-fba4a8313407] Pending
helpers_test.go:344: "task-pv-pod-restore" [192c415e-a498-439b-8b42-fba4a8313407] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [192c415e-a498-439b-8b42-fba4a8313407] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003706173s
addons_test.go:626: (dbg) Run:  kubectl --context addons-807933 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-807933 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-807933 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-807933 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.702805745s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.15s)
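
Note: the CSI test walks a full provision / attach / snapshot / restore cycle against csi-hostpath-driver using the manifests under testdata/csi-hostpath-driver in the minikube repo. A condensed replay of the same sequence (waiting for each object to become Bound, Running or readyToUse between steps, as the polling above does):

    kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-807933 delete pod task-pv-pod
    kubectl --context addons-807933 delete pvc hpvc
    kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-807933 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml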

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-807933 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-807933 --alsologtostderr -v=1: (1.101381973s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-tmsmw" [3a27a456-c5bb-455f-9228-1176f6695168] Pending
helpers_test.go:344: "headlamp-68456f997b-tmsmw" [3a27a456-c5bb-455f-9228-1176f6695168] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-tmsmw" [3a27a456-c5bb-455f-9228-1176f6695168] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003226486s
--- PASS: TestAddons/parallel/Headlamp (13.11s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-s8lhb" [644c5848-94ad-4c46-8338-3b2ad01b3182] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00435727s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-807933
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (17.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-807933 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-807933 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [251353f4-6bce-47db-85fa-8a7b8a6ff8ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [251353f4-6bce-47db-85fa-8a7b8a6ff8ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [251353f4-6bce-47db-85fa-8a7b8a6ff8ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00445037s
addons_test.go:891: (dbg) Run:  kubectl --context addons-807933 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 ssh "cat /opt/local-path-provisioner/pvc-c2976ab9-5fd0-4fd1-8996-109c08730171_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-807933 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-807933 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-807933 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (17.14s)
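
Note: the local-path test writes through a PVC backed by the rancher local-path provisioner and then reads the file straight off the node over ssh; the host path embeds the PVC UID, so it changes every run. The steps as executed above:

    kubectl --context addons-807933 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-807933 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-linux-amd64 -p addons-807933 ssh "cat /opt/local-path-provisioner/pvc-c2976ab9-5fd0-4fd1-8996-109c08730171_default_test-pvc/file1"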

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2bwlq" [2f6875de-2696-4e7c-beb9-27efa228893f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004983496s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-807933
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-pm765" [485d8c24-bc61-48a4-a4c0-b9b52de3ea25] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004147557s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-807933 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-807933 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestCertOptions (87.9s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-477107 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-477107 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m26.638728753s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-477107 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-477107 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-477107 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-477107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-477107
--- PASS: TestCertOptions (87.90s)
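
Note: cert-options verifies that the extra --apiserver-ips and --apiserver-names end up as SANs in the generated apiserver certificate and that the custom --apiserver-port shows up in the kubeconfig/admin.conf. The SAN part is a single openssl call over ssh:

    out/minikube-linux-amd64 -p cert-options-477107 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # look for 192.168.15.15 and www.google.com in the Subject Alternative Name block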

                                                
                                    
x
+
TestCertExpiration (277.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-355991 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-355991 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m3.236413047s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-355991 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-355991 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.92232819s)
helpers_test.go:175: Cleaning up "cert-expiration-355991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-355991
--- PASS: TestCertExpiration (277.16s)
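
Note: cert-expiration starts a cluster whose certificates are only valid for three minutes, lets that window run down (the gap between the two starts above), then re-runs start with a long expiry to confirm minikube regenerates the certificates instead of failing. The two starts as executed:

    out/minikube-linux-amd64 start -p cert-expiration-355991 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait out the 3m validity window...
    out/minikube-linux-amd64 start -p cert-expiration-355991 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio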

                                                
                                    
x
+
TestForceSystemdFlag (53.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-199580 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-199580 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (51.955355118s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-199580 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-199580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-199580
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-199580: (1.017560537s)
--- PASS: TestForceSystemdFlag (53.17s)
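
Note: with --force-systemd the test only has to confirm that CRI-O was configured for the systemd cgroup manager, which it does by dumping the drop-in minikube writes. Manual equivalent (expect a cgroup_manager = "systemd" entry under [crio.runtime]):

    out/minikube-linux-amd64 -p force-systemd-flag-199580 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"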

                                                
                                    
x
+
TestForceSystemdEnv (48.92s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-735967 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-735967 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.141423368s)
helpers_test.go:175: Cleaning up "force-systemd-env-735967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-735967
--- PASS: TestForceSystemdEnv (48.92s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.00s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
x
+
TestErrorSpam/stop (5.41s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 stop: (2.283354135s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 stop: (1.873746985s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-141390 --log_dir /tmp/nospam-141390 stop: (1.247444008s)
--- PASS: TestErrorSpam/stop (5.41s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18925-3819/.minikube/files/etc/test/nested/copy/11162/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.11s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044317 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0520 10:34:54.311128   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.316757   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.327184   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.347376   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.387639   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.467922   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.628345   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:54.948905   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:55.589813   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:56.870318   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:34:59.432128   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:35:04.553152   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:35:14.793381   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:35:35.274134   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-044317 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.106818961s)
--- PASS: TestFunctional/serial/StartWithProxy (56.11s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (55.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044317 --alsologtostderr -v=8
E0520 10:36:16.235065   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-044317 --alsologtostderr -v=8: (55.049984916s)
functional_test.go:659: soft start took 55.050646549s for "functional-044317" cluster.
--- PASS: TestFunctional/serial/SoftStart (55.05s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-044317 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.02s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 cache add registry.k8s.io/pause:3.3: (1.058812084s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 cache add registry.k8s.io/pause:latest: (1.000129118s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.02s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-044317 /tmp/TestFunctionalserialCacheCmdcacheadd_local2589616248/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cache add minikube-local-cache-test:functional-044317
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 cache add minikube-local-cache-test:functional-044317: (1.86825695s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cache delete minikube-local-cache-test:functional-044317
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-044317
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)
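Note: a sketch of the local-image cache flow exercised above. The build context path is a placeholder for the test's temporary directory:
    $ docker build -t minikube-local-cache-test:functional-044317 ./build-context            # build a throwaway local image (path is a placeholder)
    $ minikube -p functional-044317 cache add minikube-local-cache-test:functional-044317    # push it into the profile's image cache
    $ minikube -p functional-044317 cache delete minikube-local-cache-test:functional-044317 # drop it from the cache again
    $ docker rmi minikube-local-cache-test:functional-044317                                 # clean up the host-side image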

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (198.159549ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
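Note: the cache-reload check above can be reproduced by hand with the same commands the test runs:
    $ minikube -p functional-044317 ssh sudo crictl rmi registry.k8s.io/pause:latest        # delete the image inside the node
    $ minikube -p functional-044317 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now fails: "no such image ... present"
    $ minikube -p functional-044317 cache reload                                            # re-push everything held in the local cache
    $ minikube -p functional-044317 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again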

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 kubectl -- --context functional-044317 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-044317 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044317 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-044317 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.077599955s)
functional_test.go:757: restart took 33.077716418s for "functional-044317" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.08s)
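Note: a sketch of passing component flags through minikube, as exercised above; any apiserver flag could be substituted for the admission-plugin option used by the test:
    $ minikube start -p functional-044317 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    $ kubectl --context functional-044317 get po -l tier=control-plane -n kube-system -o=json   # the ComponentHealth check below verifies the control plane is Ready after the restart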

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-044317 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 logs: (1.541093391s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 logs --file /tmp/TestFunctionalserialLogsFileCmd3449034669/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 logs --file /tmp/TestFunctionalserialLogsFileCmd3449034669/001/logs.txt: (1.55457764s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-044317 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-044317
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-044317: exit status 115 (262.45161ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.227:30961 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-044317 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)
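Note: what the InvalidService test checks, as a hand-runnable sketch. The yaml is the repo's testdata (its contents are not shown in this log); it defines a service with no running backing pod:
    $ kubectl --context functional-044317 apply -f testdata/invalidsvc.yaml
    $ minikube service invalid-svc -p functional-044317      # exits 115 with SVC_UNREACHABLE instead of opening the URL
    $ kubectl --context functional-044317 delete -f testdata/invalidsvc.yaml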

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 config get cpus: exit status 14 (45.828893ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 config get cpus: exit status 14 (39.363861ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
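Note: the config round-trip from the test; exit status 14 is what the log shows for a missing key:
    $ minikube -p functional-044317 config get cpus     # exit 14: "specified key could not be found in config"
    $ minikube -p functional-044317 config set cpus 2
    $ minikube -p functional-044317 config get cpus     # prints the stored value
    $ minikube -p functional-044317 config unset cpus   # removes the key; the next get exits 14 again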

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (16.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-044317 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-044317 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20868: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.95s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044317 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-044317 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.282582ms)

                                                
                                                
-- stdout --
	* [functional-044317] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:37:33.217324   20619 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:37:33.217622   20619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:37:33.217638   20619 out.go:304] Setting ErrFile to fd 2...
	I0520 10:37:33.217645   20619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:37:33.217951   20619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:37:33.218578   20619 out.go:298] Setting JSON to false
	I0520 10:37:33.219794   20619 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1196,"bootTime":1716200257,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:37:33.219874   20619 start.go:139] virtualization: kvm guest
	I0520 10:37:33.222228   20619 out.go:177] * [functional-044317] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 10:37:33.223621   20619 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:37:33.223660   20619 notify.go:220] Checking for updates...
	I0520 10:37:33.225015   20619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:37:33.226420   20619 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:37:33.227635   20619 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:37:33.228777   20619 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:37:33.229984   20619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:37:33.231641   20619 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:37:33.232212   20619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:37:33.232265   20619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:37:33.247293   20619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44915
	I0520 10:37:33.247720   20619 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:37:33.248324   20619 main.go:141] libmachine: Using API Version  1
	I0520 10:37:33.248354   20619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:37:33.248771   20619 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:37:33.248962   20619 main.go:141] libmachine: (functional-044317) Calling .DriverName
	I0520 10:37:33.249227   20619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:37:33.249532   20619 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:37:33.249576   20619 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:37:33.263114   20619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0520 10:37:33.263470   20619 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:37:33.263895   20619 main.go:141] libmachine: Using API Version  1
	I0520 10:37:33.263926   20619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:37:33.264243   20619 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:37:33.264402   20619 main.go:141] libmachine: (functional-044317) Calling .DriverName
	I0520 10:37:33.296958   20619 out.go:177] * Using the kvm2 driver based on existing profile
	I0520 10:37:33.298190   20619 start.go:297] selected driver: kvm2
	I0520 10:37:33.298203   20619 start.go:901] validating driver "kvm2" against &{Name:functional-044317 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-044317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:37:33.298299   20619 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:37:33.300354   20619 out.go:177] 
	W0520 10:37:33.301491   20619 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 10:37:33.302646   20619 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044317 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
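Note: a sketch of the dry-run validation path shown above; --dry-run validates the request against the existing profile without touching the VM:
    $ minikube start -p functional-044317 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY (below the 1800MB usable minimum)
    $ minikube start -p functional-044317 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # a valid request passes the dry run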

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-044317 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-044317 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.421425ms)

                                                
                                                
-- stdout --
	* [functional-044317] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:37:33.482420   20705 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:37:33.482550   20705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:37:33.482562   20705 out.go:304] Setting ErrFile to fd 2...
	I0520 10:37:33.482569   20705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:37:33.482872   20705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 10:37:33.483353   20705 out.go:298] Setting JSON to false
	I0520 10:37:33.484309   20705 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1196,"bootTime":1716200257,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 10:37:33.484371   20705 start.go:139] virtualization: kvm guest
	I0520 10:37:33.486475   20705 out.go:177] * [functional-044317] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0520 10:37:33.487822   20705 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:37:33.489051   20705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:37:33.487835   20705 notify.go:220] Checking for updates...
	I0520 10:37:33.491539   20705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 10:37:33.493068   20705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 10:37:33.494450   20705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 10:37:33.495717   20705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:37:33.497579   20705 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 10:37:33.498144   20705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:37:33.498218   20705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:37:33.512769   20705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0520 10:37:33.513207   20705 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:37:33.513862   20705 main.go:141] libmachine: Using API Version  1
	I0520 10:37:33.513892   20705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:37:33.514264   20705 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:37:33.514465   20705 main.go:141] libmachine: (functional-044317) Calling .DriverName
	I0520 10:37:33.514734   20705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:37:33.515032   20705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 10:37:33.515076   20705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 10:37:33.529430   20705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I0520 10:37:33.529854   20705 main.go:141] libmachine: () Calling .GetVersion
	I0520 10:37:33.530272   20705 main.go:141] libmachine: Using API Version  1
	I0520 10:37:33.530292   20705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 10:37:33.530562   20705 main.go:141] libmachine: () Calling .GetMachineName
	I0520 10:37:33.530735   20705 main.go:141] libmachine: (functional-044317) Calling .DriverName
	I0520 10:37:33.562483   20705 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0520 10:37:33.563671   20705 start.go:297] selected driver: kvm2
	I0520 10:37:33.563682   20705 start.go:901] validating driver "kvm2" against &{Name:functional-044317 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18869/minikube-v1.33.1-1715594774-18869-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-044317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:37:33.563784   20705 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:37:33.565677   20705 out.go:177] 
	W0520 10:37:33.566764   20705 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 10:37:33.567947   20705 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
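Note: the three status output styles exercised above. In the -f form the labels are free text while the {{.Field}} accessors must match the status struct:
    $ minikube -p functional-044317 status                     # default table output
    $ minikube -p functional-044317 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    $ minikube -p functional-044317 status -o json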

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-044317 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-044317 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-wqmm5" [3f668647-7de4-4675-b99e-815e7a5db0d1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-wqmm5" [3f668647-7de4-4675-b99e-815e7a5db0d1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004538231s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.227:31870
functional_test.go:1671: http://192.168.39.227:31870: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-wqmm5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.227:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.227:31870
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.57s)
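Note: reproducing the NodePort round-trip above by hand; the node IP and port are from this particular run and will differ elsewhere:
    $ kubectl --context functional-044317 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    $ kubectl --context functional-044317 expose deployment hello-node-connect --type=NodePort --port=8080
    $ minikube -p functional-044317 service hello-node-connect --url   # printed http://192.168.39.227:31870 in this run
    $ curl http://192.168.39.227:31870/                                # echoserver reports the request details back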

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (50.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3bc3d873-bf2f-4805-831b-46268f672806] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005081544s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-044317 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-044317 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-044317 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044317 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4d38617c-6562-48e1-a234-88ab388a41ae] Pending
helpers_test.go:344: "sp-pod" [4d38617c-6562-48e1-a234-88ab388a41ae] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4d38617c-6562-48e1-a234-88ab388a41ae] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00560894s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-044317 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-044317 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-044317 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b509fa55-4b5d-4d63-bdef-388c53177f84] Pending
helpers_test.go:344: "sp-pod" [b509fa55-4b5d-4d63-bdef-388c53177f84] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b509fa55-4b5d-4d63-bdef-388c53177f84] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.004461897s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-044317 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.91s)
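Note: the persistence check above as a sketch. pvc.yaml and pod.yaml are repo testdata whose contents are not shown in this log; the claim name (myclaim) and pod name (sp-pod) are taken from the output:
    $ kubectl --context functional-044317 apply -f testdata/storage-provisioner/pvc.yaml
    $ kubectl --context functional-044317 get pvc myclaim -o=json                          # wait for the claim to be Bound
    $ kubectl --context functional-044317 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-044317 exec sp-pod -- touch /tmp/mount/foo              # write through the mounted volume
    $ kubectl --context functional-044317 delete -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-044317 apply -f testdata/storage-provisioner/pod.yaml
    $ kubectl --context functional-044317 exec sp-pod -- ls /tmp/mount                     # foo survives the pod recreation via the PV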

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh -n functional-044317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cp functional-044317:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3256632025/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh -n functional-044317 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh -n functional-044317 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-044317 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-zdhn5" [c7c09f1d-7938-4074-a87c-1e59d9949586] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-zdhn5" [c7c09f1d-7938-4074-a87c-1e59d9949586] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.00378635s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-044317 exec mysql-64454c8b5c-zdhn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-044317 exec mysql-64454c8b5c-zdhn5 -- mysql -ppassword -e "show databases;": exit status 1 (126.280744ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-044317 exec mysql-64454c8b5c-zdhn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-044317 exec mysql-64454c8b5c-zdhn5 -- mysql -ppassword -e "show databases;": exit status 1 (142.90738ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-044317 exec mysql-64454c8b5c-zdhn5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.57s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11162/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /etc/test/nested/copy/11162/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11162.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /etc/ssl/certs/11162.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11162.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /usr/share/ca-certificates/11162.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/111622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /etc/ssl/certs/111622.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/111622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /usr/share/ca-certificates/111622.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
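Note: the hash-named files checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention for /etc/ssl/certs, so TLS lookups find the synced certs by hash as well as by name. A hedged sketch of verifying that correspondence (assumes openssl on the host; the exact hash values for these certs are an assumption):
    $ minikube -p functional-044317 ssh "sudo cat /etc/ssl/certs/11162.pem" | openssl x509 -noout -subject_hash   # expected to print 51391683 for this cert
    $ minikube -p functional-044317 ssh "sudo cat /etc/ssl/certs/51391683.0"                                      # same certificate, addressable by its hash name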

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-044317 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh "sudo systemctl is-active docker": exit status 1 (202.356116ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh "sudo systemctl is-active containerd": exit status 1 (208.078835ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
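Note: the runtime check above, plus the positive case for the active runtime. Only docker and containerd are probed in the log; the crio line is an assumption for a --container-runtime=crio profile:
    $ minikube -p functional-044317 ssh "sudo systemctl is-active docker"       # prints "inactive"; systemctl exits 3, surfaced as an ssh failure
    $ minikube -p functional-044317 ssh "sudo systemctl is-active containerd"   # likewise "inactive"
    $ minikube -p functional-044317 ssh "sudo systemctl is-active crio"         # expected "active" on this profile (not run in the log)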

                                                
                                    
x
+
TestFunctional/parallel/License (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 version -o=json --components: (1.003654612s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044317 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-044317
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-044317
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044317 image ls --format short --alsologtostderr:
I0520 10:37:45.967219   21627 out.go:291] Setting OutFile to fd 1 ...
I0520 10:37:45.967481   21627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:45.967491   21627 out.go:304] Setting ErrFile to fd 2...
I0520 10:37:45.967495   21627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:45.967650   21627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
I0520 10:37:45.968157   21627 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:45.968247   21627 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:45.968579   21627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:45.968634   21627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:45.983322   21627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
I0520 10:37:45.983822   21627 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:45.984435   21627 main.go:141] libmachine: Using API Version  1
I0520 10:37:45.984462   21627 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:45.984826   21627 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:45.985074   21627 main.go:141] libmachine: (functional-044317) Calling .GetState
I0520 10:37:45.986989   21627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:45.987045   21627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:46.001731   21627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
I0520 10:37:46.002139   21627 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:46.002578   21627 main.go:141] libmachine: Using API Version  1
I0520 10:37:46.002601   21627 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:46.002937   21627 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:46.003153   21627 main.go:141] libmachine: (functional-044317) Calling .DriverName
I0520 10:37:46.003371   21627 ssh_runner.go:195] Run: systemctl --version
I0520 10:37:46.003393   21627 main.go:141] libmachine: (functional-044317) Calling .GetSSHHostname
I0520 10:37:46.006458   21627 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:46.006926   21627 main.go:141] libmachine: (functional-044317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9b:05", ip: ""} in network mk-functional-044317: {Iface:virbr1 ExpiryTime:2024-05-20 11:34:53 +0000 UTC Type:0 Mac:52:54:00:5d:9b:05 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-044317 Clientid:01:52:54:00:5d:9b:05}
I0520 10:37:46.006950   21627 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined IP address 192.168.39.227 and MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:46.007115   21627 main.go:141] libmachine: (functional-044317) Calling .GetSSHPort
I0520 10:37:46.007292   21627 main.go:141] libmachine: (functional-044317) Calling .GetSSHKeyPath
I0520 10:37:46.007507   21627 main.go:141] libmachine: (functional-044317) Calling .GetSSHUsername
I0520 10:37:46.007658   21627 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/functional-044317/id_rsa Username:docker}
I0520 10:37:46.118829   21627 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 10:37:46.203905   21627 main.go:141] libmachine: Making call to close driver server
I0520 10:37:46.203922   21627 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:46.204196   21627 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:46.204259   21627 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 10:37:46.204271   21627 main.go:141] libmachine: Making call to close driver server
I0520 10:37:46.204281   21627 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:46.204235   21627 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:46.204534   21627 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:46.204558   21627 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:46.204567   21627 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
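
As the stderr trace shows, `image ls` is implemented by SSHing into the guest and running `sudo crictl images --output json` against CRI-O. A minimal sketch of reproducing the same listing by hand, assuming the functional-044317 profile is still running:

    # query the CRI-O image store directly inside the minikube guest
    out/minikube-linux-amd64 -p functional-044317 ssh "sudo crictl images"
    # or fetch the raw JSON that the test parses
    out/minikube-linux-amd64 -p functional-044317 ssh "sudo crictl images --output json"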

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044317 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | e784f4560448b | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-044317  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-044317  | 92df169f281c1 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-044317  | 995144416ffdb | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044317 image ls --format table --alsologtostderr:
I0520 10:37:50.883892   21802 out.go:291] Setting OutFile to fd 1 ...
I0520 10:37:50.883972   21802 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:50.883980   21802 out.go:304] Setting ErrFile to fd 2...
I0520 10:37:50.883984   21802 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:50.884165   21802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
I0520 10:37:50.884660   21802 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:50.884748   21802 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:50.885133   21802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:50.885179   21802 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:50.900137   21802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
I0520 10:37:50.900497   21802 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:50.901093   21802 main.go:141] libmachine: Using API Version  1
I0520 10:37:50.901118   21802 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:50.901502   21802 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:50.901713   21802 main.go:141] libmachine: (functional-044317) Calling .GetState
I0520 10:37:50.903657   21802 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:50.903693   21802 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:50.918116   21802 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
I0520 10:37:50.918496   21802 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:50.919053   21802 main.go:141] libmachine: Using API Version  1
I0520 10:37:50.919086   21802 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:50.919407   21802 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:50.919605   21802 main.go:141] libmachine: (functional-044317) Calling .DriverName
I0520 10:37:50.919815   21802 ssh_runner.go:195] Run: systemctl --version
I0520 10:37:50.919851   21802 main.go:141] libmachine: (functional-044317) Calling .GetSSHHostname
I0520 10:37:50.922341   21802 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:50.922820   21802 main.go:141] libmachine: (functional-044317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9b:05", ip: ""} in network mk-functional-044317: {Iface:virbr1 ExpiryTime:2024-05-20 11:34:53 +0000 UTC Type:0 Mac:52:54:00:5d:9b:05 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-044317 Clientid:01:52:54:00:5d:9b:05}
I0520 10:37:50.922858   21802 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined IP address 192.168.39.227 and MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:50.922945   21802 main.go:141] libmachine: (functional-044317) Calling .GetSSHPort
I0520 10:37:50.923100   21802 main.go:141] libmachine: (functional-044317) Calling .GetSSHKeyPath
I0520 10:37:50.923257   21802 main.go:141] libmachine: (functional-044317) Calling .GetSSHUsername
I0520 10:37:50.923404   21802 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/functional-044317/id_rsa Username:docker}
I0520 10:37:51.004766   21802 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 10:37:51.048258   21802 main.go:141] libmachine: Making call to close driver server
I0520 10:37:51.048275   21802 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:51.048563   21802 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:51.048578   21802 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 10:37:51.048591   21802 main.go:141] libmachine: Making call to close driver server
I0520 10:37:51.048598   21802 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:51.048879   21802 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:51.048872   21802 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:51.048908   21802 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044317 image ls --format json --alsologtostderr:
[{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52
d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags"
:["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"170eb5e0ee4bd21cc572db7302ea8e2028c921be4bf9762c3d2a2ba3a6a589be","repoDigests":["docker.io/library/42f7b8853c79859efb0b3aecede4550a9a83ebd2a6db4bb685799877b03669e6-tmp@sha256:9c0b09afe1d2ba0db17766466a0dc367f263b7b9077bc70424548f99d031a561"],"repoTags":[],"size":"1466018"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","rep
oDigests":["docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c","docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"191805953"},{"id":"92df169f281c1b45a11fb02dfe7fec0372e39e0364ab12c20c9d32241ba025b6","repoDigests":["localhost/my-image@sha256:2218e394ccf7768ae34f0dedf490cfee99a718d20fc0c0d96370de7f6d645712"],"repoTags":["localhost/my-image:functional-044317"],"size":"1468600"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddc
aaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b18
69c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-044317"],"size":"34114467"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube
/busybox:latest"],"size":"1462480"},{"id":"995144416ffdbcb768057fb28d442ea15c22d690398b146390fc300f77392038","repoDigests":["localhost/minikube-local-cache-test@sha256:7e03d2aaf45124ec999257653bcc6fb25eac11b87a1218dc17623744bf222c91"],"repoTags":["localhost/minikube-local-cache-test:functional-044317"],"size":"3328"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy
:v1.30.1"],"size":"85933465"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044317 image ls --format json --alsologtostderr:
I0520 10:37:50.818472   21783 out.go:291] Setting OutFile to fd 1 ...
I0520 10:37:50.818690   21783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:50.818698   21783 out.go:304] Setting ErrFile to fd 2...
I0520 10:37:50.818702   21783 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:50.818849   21783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
I0520 10:37:50.819314   21783 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:50.819402   21783 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:50.819734   21783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:50.819787   21783 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:50.834253   21783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
I0520 10:37:50.834685   21783 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:50.835221   21783 main.go:141] libmachine: Using API Version  1
I0520 10:37:50.835241   21783 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:50.835605   21783 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:50.835777   21783 main.go:141] libmachine: (functional-044317) Calling .GetState
I0520 10:37:50.837589   21783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:50.837627   21783 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:50.858111   21783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
I0520 10:37:50.858553   21783 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:50.859078   21783 main.go:141] libmachine: Using API Version  1
I0520 10:37:50.859104   21783 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:50.859503   21783 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:50.859719   21783 main.go:141] libmachine: (functional-044317) Calling .DriverName
I0520 10:37:50.859933   21783 ssh_runner.go:195] Run: systemctl --version
I0520 10:37:50.859958   21783 main.go:141] libmachine: (functional-044317) Calling .GetSSHHostname
I0520 10:37:50.862804   21783 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:50.863208   21783 main.go:141] libmachine: (functional-044317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9b:05", ip: ""} in network mk-functional-044317: {Iface:virbr1 ExpiryTime:2024-05-20 11:34:53 +0000 UTC Type:0 Mac:52:54:00:5d:9b:05 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-044317 Clientid:01:52:54:00:5d:9b:05}
I0520 10:37:50.863245   21783 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined IP address 192.168.39.227 and MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:50.863434   21783 main.go:141] libmachine: (functional-044317) Calling .GetSSHPort
I0520 10:37:50.863618   21783 main.go:141] libmachine: (functional-044317) Calling .GetSSHKeyPath
I0520 10:37:50.863783   21783 main.go:141] libmachine: (functional-044317) Calling .GetSSHUsername
I0520 10:37:50.863933   21783 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/functional-044317/id_rsa Username:docker}
I0520 10:37:50.949311   21783 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 10:37:50.991820   21783 main.go:141] libmachine: Making call to close driver server
I0520 10:37:50.991837   21783 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:50.992113   21783 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:50.992131   21783 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 10:37:50.992142   21783 main.go:141] libmachine: Making call to close driver server
I0520 10:37:50.992150   21783 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:50.992206   21783 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:50.992378   21783 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:50.992389   21783 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 10:37:50.992407   21783 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
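
The JSON form above is a single array of objects with id, repoDigests, repoTags and size fields, so it lends itself to host-side post-processing. A hedged sketch using jq (jq is not part of the test; it is assumed to be installed on the host):

    # print every repo tag reported by `image ls --format json`
    out/minikube-linux-amd64 -p functional-044317 image ls --format json | jq -r '.[] | .repoTags[]?'
    # sum the reported image sizes (the size field is a string of bytes)
    out/minikube-linux-amd64 -p functional-044317 image ls --format json | jq '[.[] | (.size | tonumber)] | add'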

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044317 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 995144416ffdbcb768057fb28d442ea15c22d690398b146390fc300f77392038
repoDigests:
- localhost/minikube-local-cache-test@sha256:7e03d2aaf45124ec999257653bcc6fb25eac11b87a1218dc17623744bf222c91
repoTags:
- localhost/minikube-local-cache-test:functional-044317
size: "3328"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-044317
size: "34114467"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests:
- docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c
- docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b
repoTags:
- docker.io/library/nginx:latest
size: "191805953"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044317 image ls --format yaml --alsologtostderr:
I0520 10:37:46.258838   21650 out.go:291] Setting OutFile to fd 1 ...
I0520 10:37:46.259172   21650 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:46.259188   21650 out.go:304] Setting ErrFile to fd 2...
I0520 10:37:46.259194   21650 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:46.259506   21650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
I0520 10:37:46.260316   21650 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:46.260471   21650 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:46.261070   21650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:46.261132   21650 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:46.276036   21650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
I0520 10:37:46.276518   21650 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:46.277179   21650 main.go:141] libmachine: Using API Version  1
I0520 10:37:46.277207   21650 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:46.277571   21650 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:46.277796   21650 main.go:141] libmachine: (functional-044317) Calling .GetState
I0520 10:37:46.279651   21650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:46.279697   21650 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:46.294410   21650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
I0520 10:37:46.294828   21650 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:46.295244   21650 main.go:141] libmachine: Using API Version  1
I0520 10:37:46.295265   21650 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:46.295700   21650 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:46.295963   21650 main.go:141] libmachine: (functional-044317) Calling .DriverName
I0520 10:37:46.296230   21650 ssh_runner.go:195] Run: systemctl --version
I0520 10:37:46.296257   21650 main.go:141] libmachine: (functional-044317) Calling .GetSSHHostname
I0520 10:37:46.299184   21650 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:46.299632   21650 main.go:141] libmachine: (functional-044317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9b:05", ip: ""} in network mk-functional-044317: {Iface:virbr1 ExpiryTime:2024-05-20 11:34:53 +0000 UTC Type:0 Mac:52:54:00:5d:9b:05 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-044317 Clientid:01:52:54:00:5d:9b:05}
I0520 10:37:46.299675   21650 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined IP address 192.168.39.227 and MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:46.299905   21650 main.go:141] libmachine: (functional-044317) Calling .GetSSHPort
I0520 10:37:46.300110   21650 main.go:141] libmachine: (functional-044317) Calling .GetSSHKeyPath
I0520 10:37:46.300278   21650 main.go:141] libmachine: (functional-044317) Calling .GetSSHUsername
I0520 10:37:46.300437   21650 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/functional-044317/id_rsa Username:docker}
I0520 10:37:46.383431   21650 ssh_runner.go:195] Run: sudo crictl images --output json
I0520 10:37:46.428618   21650 main.go:141] libmachine: Making call to close driver server
I0520 10:37:46.428630   21650 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:46.428907   21650 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:46.428936   21650 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 10:37:46.428935   21650 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:46.428945   21650 main.go:141] libmachine: Making call to close driver server
I0520 10:37:46.428955   21650 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:46.429155   21650 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:46.429248   21650 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:46.429275   21650 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh pgrep buildkitd: exit status 1 (172.307749ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image build -t localhost/my-image:functional-044317 testdata/build --alsologtostderr
2024/05/20 10:37:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image build -t localhost/my-image:functional-044317 testdata/build --alsologtostderr: (3.962789441s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-044317 image build -t localhost/my-image:functional-044317 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 170eb5e0ee4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-044317
--> 92df169f281
Successfully tagged localhost/my-image:functional-044317
92df169f281c1b45a11fb02dfe7fec0372e39e0364ab12c20c9d32241ba025b6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-044317 image build -t localhost/my-image:functional-044317 testdata/build --alsologtostderr:
I0520 10:37:46.646723   21704 out.go:291] Setting OutFile to fd 1 ...
I0520 10:37:46.646862   21704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:46.646871   21704 out.go:304] Setting ErrFile to fd 2...
I0520 10:37:46.646876   21704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:37:46.647055   21704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
I0520 10:37:46.647613   21704 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:46.648143   21704 config.go:182] Loaded profile config "functional-044317": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0520 10:37:46.648500   21704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:46.648555   21704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:46.664653   21704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
I0520 10:37:46.665136   21704 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:46.665625   21704 main.go:141] libmachine: Using API Version  1
I0520 10:37:46.665650   21704 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:46.666019   21704 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:46.666212   21704 main.go:141] libmachine: (functional-044317) Calling .GetState
I0520 10:37:46.667971   21704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0520 10:37:46.668003   21704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0520 10:37:46.682567   21704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
I0520 10:37:46.682943   21704 main.go:141] libmachine: () Calling .GetVersion
I0520 10:37:46.683359   21704 main.go:141] libmachine: Using API Version  1
I0520 10:37:46.683380   21704 main.go:141] libmachine: () Calling .SetConfigRaw
I0520 10:37:46.683732   21704 main.go:141] libmachine: () Calling .GetMachineName
I0520 10:37:46.683916   21704 main.go:141] libmachine: (functional-044317) Calling .DriverName
I0520 10:37:46.684114   21704 ssh_runner.go:195] Run: systemctl --version
I0520 10:37:46.684139   21704 main.go:141] libmachine: (functional-044317) Calling .GetSSHHostname
I0520 10:37:46.686807   21704 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:46.687187   21704 main.go:141] libmachine: (functional-044317) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9b:05", ip: ""} in network mk-functional-044317: {Iface:virbr1 ExpiryTime:2024-05-20 11:34:53 +0000 UTC Type:0 Mac:52:54:00:5d:9b:05 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-044317 Clientid:01:52:54:00:5d:9b:05}
I0520 10:37:46.687221   21704 main.go:141] libmachine: (functional-044317) DBG | domain functional-044317 has defined IP address 192.168.39.227 and MAC address 52:54:00:5d:9b:05 in network mk-functional-044317
I0520 10:37:46.687325   21704 main.go:141] libmachine: (functional-044317) Calling .GetSSHPort
I0520 10:37:46.687495   21704 main.go:141] libmachine: (functional-044317) Calling .GetSSHKeyPath
I0520 10:37:46.687659   21704 main.go:141] libmachine: (functional-044317) Calling .GetSSHUsername
I0520 10:37:46.687773   21704 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/functional-044317/id_rsa Username:docker}
I0520 10:37:46.768013   21704 build_images.go:161] Building image from path: /tmp/build.1954172435.tar
I0520 10:37:46.768095   21704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 10:37:46.780665   21704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1954172435.tar
I0520 10:37:46.785602   21704 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1954172435.tar: stat -c "%s %y" /var/lib/minikube/build/build.1954172435.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1954172435.tar': No such file or directory
I0520 10:37:46.785630   21704 ssh_runner.go:362] scp /tmp/build.1954172435.tar --> /var/lib/minikube/build/build.1954172435.tar (3072 bytes)
I0520 10:37:46.813063   21704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1954172435
I0520 10:37:46.823336   21704 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1954172435 -xf /var/lib/minikube/build/build.1954172435.tar
I0520 10:37:46.846738   21704 crio.go:315] Building image: /var/lib/minikube/build/build.1954172435
I0520 10:37:46.846824   21704 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-044317 /var/lib/minikube/build/build.1954172435 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0520 10:37:50.540914   21704 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-044317 /var/lib/minikube/build/build.1954172435 --cgroup-manager=cgroupfs: (3.694054872s)
I0520 10:37:50.540986   21704 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1954172435
I0520 10:37:50.553631   21704 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1954172435.tar
I0520 10:37:50.566138   21704 build_images.go:217] Built localhost/my-image:functional-044317 from /tmp/build.1954172435.tar
I0520 10:37:50.566162   21704 build_images.go:133] succeeded building to: functional-044317
I0520 10:37:50.566168   21704 build_images.go:134] failed building to: 
I0520 10:37:50.566211   21704 main.go:141] libmachine: Making call to close driver server
I0520 10:37:50.566225   21704 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:50.566464   21704 main.go:141] libmachine: (functional-044317) DBG | Closing plugin on server side
I0520 10:37:50.566504   21704 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:50.566525   21704 main.go:141] libmachine: Making call to close connection to plugin binary
I0520 10:37:50.566539   21704 main.go:141] libmachine: Making call to close driver server
I0520 10:37:50.566547   21704 main.go:141] libmachine: (functional-044317) Calling .Close
I0520 10:37:50.566767   21704 main.go:141] libmachine: Successfully made call to close driver server
I0520 10:37:50.566785   21704 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)
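
The STEP 1/3 .. 3/3 lines imply that testdata/build contains a three-line container build file; reconstructed purely from that output it would look roughly like this (the actual file in the repository may differ):

    # testdata/build Dockerfile, reconstructed from the STEP lines above, not copied from the repo
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

On the crio runtime the build itself is delegated to podman inside the guest, as the `sudo podman build -t localhost/my-image:functional-044317 ... --cgroup-manager=cgroupfs` line in the stderr trace shows.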

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.201621026s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-044317
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-044317 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-044317 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ct6rk" [c52bf27d-f3cf-4db9-8f3a-bac7177b6499] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ct6rk" [c52bf27d-f3cf-4db9-8f3a-bac7177b6499] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004417604s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)
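
The deployment itself is plain kubectl; the readiness check is minikube's own polling helper. An equivalent manual sequence, sketched with kubectl wait in place of that helper (assumes the functional-044317 context is active):

    kubectl --context functional-044317 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-044317 expose deployment hello-node --type=NodePort --port=8080
    # block until the pod behind the deployment reports Ready
    kubectl --context functional-044317 wait --for=condition=Ready pod -l app=hello-node --timeout=600s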

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr: (3.831882933s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr: (2.43804162s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.094258968s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-044317
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr: (6.680622381s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.03s)
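
The tag-and-load flow is: pull into the host's Docker daemon, retag under the profile name, then copy the image into the cluster's CRI-O store with `image load --daemon`. The same flow by hand, as a sketch:

    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-044317
    # transfer the image from the host docker daemon into the cluster runtime
    out/minikube-linux-amd64 -p functional-044317 image load --daemon gcr.io/google-containers/addon-resizer:functional-044317
    # confirm it is now visible to CRI-O
    out/minikube-linux-amd64 -p functional-044317 image ls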

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 service list -o json
functional_test.go:1490: Took "328.599359ms" to run "out/minikube-linux-amd64 -p functional-044317 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.227:30444
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.227:30444
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
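
The HTTPS/Format/URL subtests only resolve the NodePort endpoint; they do not exercise it. A quick manual check of the endpoint found above (curl assumed available on the host; echoserver should echo the request details back):

    curl -s http://192.168.39.227:30444/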

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "294.396675ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "40.041944ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdany-port2031474890/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716201452604810271" to /tmp/TestFunctionalparallelMountCmdany-port2031474890/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716201452604810271" to /tmp/TestFunctionalparallelMountCmdany-port2031474890/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716201452604810271" to /tmp/TestFunctionalparallelMountCmdany-port2031474890/001/test-1716201452604810271
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.162759ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 20 10:37 created-by-test
-rw-r--r-- 1 docker docker 24 May 20 10:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 20 10:37 test-1716201452604810271
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh cat /mount-9p/test-1716201452604810271
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-044317 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2927d7b5-f8c0-4aad-a488-322fcbd3e6f9] Pending
helpers_test.go:344: "busybox-mount" [2927d7b5-f8c0-4aad-a488-322fcbd3e6f9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2927d7b5-f8c0-4aad-a488-322fcbd3e6f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2927d7b5-f8c0-4aad-a488-322fcbd3e6f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004549427s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-044317 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdany-port2031474890/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.61s)
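For reference, the 9p mount round-trip exercised above can be reproduced by hand against a running profile. This is a minimal sketch using only commands and flags that appear in this log; the profile name functional-044317 is from this run and the /tmp/mount-demo path is a placeholder. The first findmnt probe may exit non-zero until the mount daemon is ready, which is why the test retries it.
	mkdir -p /tmp/mount-demo && echo "test" > /tmp/mount-demo/created-by-test
	out/minikube-linux-amd64 mount -p functional-044317 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
	# probe until the 9p mount shows up inside the guest
	out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-044317 ssh -- ls -la /mount-9p
	# clean up: force-unmount inside the guest, then stop the host-side mount process
	out/minikube-linux-amd64 -p functional-044317 ssh "sudo umount -f /mount-9p"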

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "225.718683ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "45.448134ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image save gcr.io/google-containers/addon-resizer:functional-044317 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0520 10:37:38.155573   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image save gcr.io/google-containers/addon-resizer:functional-044317 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.377087201s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image rm gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.437739467s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-044317
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 image save --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-044317 image save --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr: (1.322524344s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-044317
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.49s)
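Taken together, the four ImageCommands cases above exercise a save/remove/load round trip for a tagged image. A condensed sketch of that flow, using only commands shown in this log (the /tmp/addon-resizer-save.tar output path is a placeholder):
	# export the image from the cluster runtime to a tarball
	out/minikube-linux-amd64 -p functional-044317 image save gcr.io/google-containers/addon-resizer:functional-044317 /tmp/addon-resizer-save.tar --alsologtostderr
	# remove it from the cluster, then re-import it from the tarball and list images
	out/minikube-linux-amd64 -p functional-044317 image rm gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
	out/minikube-linux-amd64 -p functional-044317 image load /tmp/addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-044317 image ls
	# or push it straight into the local docker daemon and verify it arrived
	out/minikube-linux-amd64 -p functional-044317 image save --daemon gcr.io/google-containers/addon-resizer:functional-044317 --alsologtostderr
	docker image inspect gcr.io/google-containers/addon-resizer:functional-044317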

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdspecific-port579184829/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.587512ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdspecific-port579184829/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh "sudo umount -f /mount-9p": exit status 1 (218.262362ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-044317 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdspecific-port579184829/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267052607/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267052607/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267052607/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T" /mount1: exit status 1 (329.780341ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-044317 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267052607/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267052607/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-044317 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267052607/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)
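VerifyCleanup checks that mount --kill=true tears down every host-side mount process for a profile at once. A sketch of that cleanup path, using only flags shown above (the /tmp/shared path is a placeholder):
	out/minikube-linux-amd64 mount -p functional-044317 /tmp/shared:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-044317 /tmp/shared:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-044317 ssh "findmnt -T" /mount1
	# kill all mount helper processes for this profile in one shot
	out/minikube-linux-amd64 mount -p functional-044317 --kill=true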

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-044317 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
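The three update-context cases differ only in the state of the existing kubeconfig; the command they run is identical. A sketch of that check (the kubectl context inspection afterwards is an addition here, not part of the test):
	# rewrite this profile's kubeconfig entry to match the current cluster endpoint
	out/minikube-linux-amd64 -p functional-044317 update-context --alsologtostderr -v=2
	kubectl config current-context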

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-044317
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-044317
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-044317
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (255.72s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-570850 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 10:39:54.310195   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:40:21.996661   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 10:42:19.191960   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.197240   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.207477   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.227770   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.268037   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.348375   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.509238   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:19.829474   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:20.470520   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:21.751021   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:42:24.311869   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-570850 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m15.052476578s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (255.72s)
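StartCluster brings up a multi-control-plane cluster via the --ha flag and then asserts on status. The invocation, reduced to a sketch (profile name, memory sizing, and driver/runtime flags are copied from this run):
	out/minikube-linux-amd64 start -p ha-570850 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr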

TestMultiControlPlane/serial/DeployApp (7.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- rollout status deployment/busybox
E0520 10:42:29.432530   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-570850 -- rollout status deployment/busybox: (5.688975443s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-q9r5v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-wpv4q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-x2wtn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-q9r5v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-wpv4q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-x2wtn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-q9r5v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-wpv4q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-x2wtn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.83s)
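DeployApp rolls out a busybox deployment and then checks in-cluster DNS from each replica. The per-pod check reduces to the following sketch (the pod name is just one example from this run):
	out/minikube-linux-amd64 kubectl -p ha-570850 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-q9r5v -- nslookup kubernetes.default.svc.cluster.local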

TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-q9r5v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-q9r5v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-wpv4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-wpv4q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-x2wtn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-570850 -- exec busybox-fc5497c4f-x2wtn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)

TestMultiControlPlane/serial/AddWorkerNode (50s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-570850 -v=7 --alsologtostderr
E0520 10:42:39.673320   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:43:00.153709   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-570850 -v=7 --alsologtostderr: (49.124428247s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.00s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-570850 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

TestMultiControlPlane/serial/CopyFile (12.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp testdata/cp-test.txt ha-570850:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850:/home/docker/cp-test.txt ha-570850-m02:/home/docker/cp-test_ha-570850_ha-570850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test_ha-570850_ha-570850-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850:/home/docker/cp-test.txt ha-570850-m03:/home/docker/cp-test_ha-570850_ha-570850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test_ha-570850_ha-570850-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850:/home/docker/cp-test.txt ha-570850-m04:/home/docker/cp-test_ha-570850_ha-570850-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test_ha-570850_ha-570850-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp testdata/cp-test.txt ha-570850-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m02:/home/docker/cp-test.txt ha-570850:/home/docker/cp-test_ha-570850-m02_ha-570850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test_ha-570850-m02_ha-570850.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m02:/home/docker/cp-test.txt ha-570850-m03:/home/docker/cp-test_ha-570850-m02_ha-570850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test_ha-570850-m02_ha-570850-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m02:/home/docker/cp-test.txt ha-570850-m04:/home/docker/cp-test_ha-570850-m02_ha-570850-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test_ha-570850-m02_ha-570850-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp testdata/cp-test.txt ha-570850-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt ha-570850:/home/docker/cp-test_ha-570850-m03_ha-570850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test_ha-570850-m03_ha-570850.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt ha-570850-m02:/home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test_ha-570850-m03_ha-570850-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m03:/home/docker/cp-test.txt ha-570850-m04:/home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test_ha-570850-m03_ha-570850-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp testdata/cp-test.txt ha-570850-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2002648180/001/cp-test_ha-570850-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt ha-570850:/home/docker/cp-test_ha-570850-m04_ha-570850.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850 "sudo cat /home/docker/cp-test_ha-570850-m04_ha-570850.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt ha-570850-m02:/home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test_ha-570850-m04_ha-570850-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 cp ha-570850-m04:/home/docker/cp-test.txt ha-570850-m03:/home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m03 "sudo cat /home/docker/cp-test_ha-570850-m04_ha-570850-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.34s)
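CopyFile copies a test file onto every node and in every node-to-node direction, verifying each copy over ssh. One leg of that matrix, as a sketch using only commands shown above:
	# host -> node, then node -> node, verifying on the target via ssh
	out/minikube-linux-amd64 -p ha-570850 cp testdata/cp-test.txt ha-570850:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-570850 cp ha-570850:/home/docker/cp-test.txt ha-570850-m02:/home/docker/cp-test_ha-570850_ha-570850-m02.txt
	out/minikube-linux-amd64 -p ha-570850 ssh -n ha-570850-m02 "sudo cat /home/docker/cp-test_ha-570850_ha-570850-m02.txt"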

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.49007694s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

TestMultiControlPlane/serial/RestartCluster (326.43s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-570850 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 10:57:19.192809   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:58:42.236994   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 10:59:54.309914   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-570850 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m25.686867729s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (326.43s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

TestMultiControlPlane/serial/AddSecondaryNode (76.73s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-570850 --control-plane -v=7 --alsologtostderr
E0520 11:02:19.191683   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-570850 --control-plane -v=7 --alsologtostderr: (1m15.896812127s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-570850 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestJSONOutput/start/Command (58.43s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-242523 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-242523 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.430908731s)
--- PASS: TestJSONOutput/start/Command (58.43s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-242523 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-242523 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-242523 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-242523 --output=json --user=testUser: (7.338523316s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-419610 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-419610 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.429055ms)

-- stdout --
	{"specversion":"1.0","id":"757d0211-e4cb-43ec-9227-10beab202717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-419610] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fabac561-3570-4928-a763-01dfd0db98d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"fd895ef5-0207-4f7f-a22f-ef0aaeeb8889","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5cc1972b-ee80-4d76-9292-9c1946e35261","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig"}}
	{"specversion":"1.0","id":"711cd896-df8f-4958-948c-fd7b364cbdfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube"}}
	{"specversion":"1.0","id":"b2a426cb-a2fb-4eb8-a86e-1790d5726928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a328074d-b73d-4b9c-bca9-c7e99dca4ae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"06afebea-0163-4b3d-ab6b-b311806ef1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-419610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-419610
--- PASS: TestErrorJSONOutput (0.18s)
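With --output=json, every line minikube prints on stdout is a CloudEvents-style JSON object, so error events like the DRV_UNSUPPORTED_OS one above can be picked out mechanically. A sketch of filtering for such events (jq is an assumption here, not something this suite uses):
	out/minikube-linux-amd64 start -p json-output-error-419610 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'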

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (89.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-224925 --driver=kvm2  --container-runtime=crio
E0520 11:04:54.310717   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-224925 --driver=kvm2  --container-runtime=crio: (43.823147397s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-228316 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-228316 --driver=kvm2  --container-runtime=crio: (42.878849345s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-224925
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-228316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-228316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-228316
helpers_test.go:175: Cleaning up "first-224925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-224925
--- PASS: TestMinikubeProfile (89.26s)
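TestMinikubeProfile creates two clusters, flips the active profile between them, and reads the result back as JSON before cleaning both up. Reduced to a sketch using only commands from this run:
	out/minikube-linux-amd64 start -p first-224925 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p second-228316 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 profile first-224925      # make it the active profile
	out/minikube-linux-amd64 profile list -ojson       # inspect which profile is active
	out/minikube-linux-amd64 delete -p second-228316
	out/minikube-linux-amd64 delete -p first-224925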

TestMountStart/serial/StartWithMountFirst (25.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-622154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-622154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.674367145s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.67s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-622154 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-622154 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
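The MountStart group starts a Kubernetes-less VM whose only job is to host a 9p mount, then verifies it from inside the guest. The essential invocation, as a sketch (profile name, sizes, and ports are copied from this run):
	out/minikube-linux-amd64 start -p mount-start-1-622154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-622154 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-622154 ssh -- mount | grep 9p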

TestMountStart/serial/StartWithMountSecond (29.09s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-634658 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-634658 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.089518912s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.09s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634658 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634658 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.87s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-622154 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634658 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634658 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.76s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-634658
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-634658: (1.756359213s)
--- PASS: TestMountStart/serial/Stop (1.76s)

TestMountStart/serial/RestartStopped (20.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-634658
E0520 11:07:19.192868   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-634658: (19.758127794s)
--- PASS: TestMountStart/serial/RestartStopped (20.76s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634658 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-634658 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (123.71s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563238 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 11:07:57.357256   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563238 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.314938445s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.71s)

TestMultiNode/serial/DeployApp2Nodes (5.58s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-563238 -- rollout status deployment/busybox: (4.166796676s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-94vv2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-twghk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-94vv2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-twghk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-94vv2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-twghk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-94vv2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-94vv2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-twghk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-563238 -- exec busybox-fc5497c4f-twghk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (41.08s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-563238 -v 3 --alsologtostderr
E0520 11:09:54.309960   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-563238 -v 3 --alsologtostderr: (40.540213179s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-563238 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (6.91s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp testdata/cp-test.txt multinode-563238:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2129143398/001/cp-test_multinode-563238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238:/home/docker/cp-test.txt multinode-563238-m02:/home/docker/cp-test_multinode-563238_multinode-563238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m02 "sudo cat /home/docker/cp-test_multinode-563238_multinode-563238-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238:/home/docker/cp-test.txt multinode-563238-m03:/home/docker/cp-test_multinode-563238_multinode-563238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m03 "sudo cat /home/docker/cp-test_multinode-563238_multinode-563238-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp testdata/cp-test.txt multinode-563238-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2129143398/001/cp-test_multinode-563238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt multinode-563238:/home/docker/cp-test_multinode-563238-m02_multinode-563238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238 "sudo cat /home/docker/cp-test_multinode-563238-m02_multinode-563238.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238-m02:/home/docker/cp-test.txt multinode-563238-m03:/home/docker/cp-test_multinode-563238-m02_multinode-563238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m03 "sudo cat /home/docker/cp-test_multinode-563238-m02_multinode-563238-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp testdata/cp-test.txt multinode-563238-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2129143398/001/cp-test_multinode-563238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt multinode-563238:/home/docker/cp-test_multinode-563238-m03_multinode-563238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238 "sudo cat /home/docker/cp-test_multinode-563238-m03_multinode-563238.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 cp multinode-563238-m03:/home/docker/cp-test.txt multinode-563238-m02:/home/docker/cp-test_multinode-563238-m03_multinode-563238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 ssh -n multinode-563238-m02 "sudo cat /home/docker/cp-test_multinode-563238-m03_multinode-563238-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.91s)

TestMultiNode/serial/StopNode (2.27s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-563238 node stop m03: (1.459291127s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563238 status: exit status 7 (401.691596ms)

                                                
                                                
-- stdout --
	multinode-563238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-563238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-563238 status --alsologtostderr: exit status 7 (409.385733ms)

                                                
                                                
-- stdout --
	multinode-563238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-563238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-563238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:10:35.515448   39510 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:10:35.515686   39510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:10:35.515696   39510 out.go:304] Setting ErrFile to fd 2...
	I0520 11:10:35.515700   39510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:10:35.515882   39510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:10:35.516037   39510 out.go:298] Setting JSON to false
	I0520 11:10:35.516062   39510 mustload.go:65] Loading cluster: multinode-563238
	I0520 11:10:35.516191   39510 notify.go:220] Checking for updates...
	I0520 11:10:35.516542   39510 config.go:182] Loaded profile config "multinode-563238": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:10:35.516564   39510 status.go:255] checking status of multinode-563238 ...
	I0520 11:10:35.517068   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.517140   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.534953   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0520 11:10:35.535421   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.535987   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.536007   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.536286   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.536514   39510 main.go:141] libmachine: (multinode-563238) Calling .GetState
	I0520 11:10:35.538108   39510 status.go:330] multinode-563238 host status = "Running" (err=<nil>)
	I0520 11:10:35.538124   39510 host.go:66] Checking if "multinode-563238" exists ...
	I0520 11:10:35.538441   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.538480   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.553182   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38119
	I0520 11:10:35.553539   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.553934   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.553951   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.554232   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.554403   39510 main.go:141] libmachine: (multinode-563238) Calling .GetIP
	I0520 11:10:35.556851   39510 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:10:35.557177   39510 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:10:35.557217   39510 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:10:35.557304   39510 host.go:66] Checking if "multinode-563238" exists ...
	I0520 11:10:35.557575   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.557606   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.571662   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0520 11:10:35.571956   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.572310   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.572334   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.572614   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.572784   39510 main.go:141] libmachine: (multinode-563238) Calling .DriverName
	I0520 11:10:35.572963   39510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:10:35.572996   39510 main.go:141] libmachine: (multinode-563238) Calling .GetSSHHostname
	I0520 11:10:35.575356   39510 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:10:35.575722   39510 main.go:141] libmachine: (multinode-563238) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:b5:89", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:07:49 +0000 UTC Type:0 Mac:52:54:00:21:b5:89 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-563238 Clientid:01:52:54:00:21:b5:89}
	I0520 11:10:35.575746   39510 main.go:141] libmachine: (multinode-563238) DBG | domain multinode-563238 has defined IP address 192.168.39.145 and MAC address 52:54:00:21:b5:89 in network mk-multinode-563238
	I0520 11:10:35.575870   39510 main.go:141] libmachine: (multinode-563238) Calling .GetSSHPort
	I0520 11:10:35.576051   39510 main.go:141] libmachine: (multinode-563238) Calling .GetSSHKeyPath
	I0520 11:10:35.576196   39510 main.go:141] libmachine: (multinode-563238) Calling .GetSSHUsername
	I0520 11:10:35.576346   39510 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238/id_rsa Username:docker}
	I0520 11:10:35.656489   39510 ssh_runner.go:195] Run: systemctl --version
	I0520 11:10:35.663013   39510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:10:35.678855   39510 kubeconfig.go:125] found "multinode-563238" server: "https://192.168.39.145:8443"
	I0520 11:10:35.678886   39510 api_server.go:166] Checking apiserver status ...
	I0520 11:10:35.678916   39510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:10:35.693445   39510 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup
	W0520 11:10:35.705395   39510 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:10:35.705442   39510 ssh_runner.go:195] Run: ls
	I0520 11:10:35.710429   39510 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I0520 11:10:35.714462   39510 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I0520 11:10:35.714482   39510 status.go:422] multinode-563238 apiserver status = Running (err=<nil>)
	I0520 11:10:35.714494   39510 status.go:257] multinode-563238 status: &{Name:multinode-563238 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 11:10:35.714513   39510 status.go:255] checking status of multinode-563238-m02 ...
	I0520 11:10:35.714781   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.714827   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.729837   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0520 11:10:35.730276   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.730810   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.730832   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.731150   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.731308   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .GetState
	I0520 11:10:35.732700   39510 status.go:330] multinode-563238-m02 host status = "Running" (err=<nil>)
	I0520 11:10:35.732720   39510 host.go:66] Checking if "multinode-563238-m02" exists ...
	I0520 11:10:35.733001   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.733032   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.747837   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39479
	I0520 11:10:35.748152   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.748541   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.748557   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.748788   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.748975   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .GetIP
	I0520 11:10:35.751693   39510 main.go:141] libmachine: (multinode-563238-m02) DBG | domain multinode-563238-m02 has defined MAC address 52:54:00:7e:dd:75 in network mk-multinode-563238
	I0520 11:10:35.752120   39510 main.go:141] libmachine: (multinode-563238-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:dd:75", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:09:15 +0000 UTC Type:0 Mac:52:54:00:7e:dd:75 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-563238-m02 Clientid:01:52:54:00:7e:dd:75}
	I0520 11:10:35.752146   39510 main.go:141] libmachine: (multinode-563238-m02) DBG | domain multinode-563238-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:7e:dd:75 in network mk-multinode-563238
	I0520 11:10:35.752260   39510 host.go:66] Checking if "multinode-563238-m02" exists ...
	I0520 11:10:35.752541   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.752573   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.766572   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I0520 11:10:35.766911   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.767284   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.767304   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.767568   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.767743   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .DriverName
	I0520 11:10:35.767927   39510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:10:35.767948   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .GetSSHHostname
	I0520 11:10:35.770321   39510 main.go:141] libmachine: (multinode-563238-m02) DBG | domain multinode-563238-m02 has defined MAC address 52:54:00:7e:dd:75 in network mk-multinode-563238
	I0520 11:10:35.770736   39510 main.go:141] libmachine: (multinode-563238-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:dd:75", ip: ""} in network mk-multinode-563238: {Iface:virbr1 ExpiryTime:2024-05-20 12:09:15 +0000 UTC Type:0 Mac:52:54:00:7e:dd:75 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-563238-m02 Clientid:01:52:54:00:7e:dd:75}
	I0520 11:10:35.770772   39510 main.go:141] libmachine: (multinode-563238-m02) DBG | domain multinode-563238-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:7e:dd:75 in network mk-multinode-563238
	I0520 11:10:35.770882   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .GetSSHPort
	I0520 11:10:35.771051   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .GetSSHKeyPath
	I0520 11:10:35.771201   39510 main.go:141] libmachine: (multinode-563238-m02) Calling .GetSSHUsername
	I0520 11:10:35.771321   39510 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18925-3819/.minikube/machines/multinode-563238-m02/id_rsa Username:docker}
	I0520 11:10:35.851865   39510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 11:10:35.865681   39510 status.go:257] multinode-563238-m02 status: &{Name:multinode-563238-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 11:10:35.865720   39510 status.go:255] checking status of multinode-563238-m03 ...
	I0520 11:10:35.866072   39510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0520 11:10:35.866108   39510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0520 11:10:35.882830   39510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I0520 11:10:35.883266   39510 main.go:141] libmachine: () Calling .GetVersion
	I0520 11:10:35.883730   39510 main.go:141] libmachine: Using API Version  1
	I0520 11:10:35.883751   39510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0520 11:10:35.884048   39510 main.go:141] libmachine: () Calling .GetMachineName
	I0520 11:10:35.884220   39510 main.go:141] libmachine: (multinode-563238-m03) Calling .GetState
	I0520 11:10:35.885626   39510 status.go:330] multinode-563238-m03 host status = "Stopped" (err=<nil>)
	I0520 11:10:35.885642   39510 status.go:343] host is not running, skipping remaining checks
	I0520 11:10:35.885650   39510 status.go:257] multinode-563238-m03 status: &{Name:multinode-563238-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (28.64s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-563238 node start m03 -v=7 --alsologtostderr: (28.030080523s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.64s)

TestMultiNode/serial/DeleteNode (2.06s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-563238 node delete m03: (1.561195961s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.06s)

TestMultiNode/serial/RestartMultiNode (178.07s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563238 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0520 11:19:54.310324   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563238 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.561900524s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-563238 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.07s)

TestMultiNode/serial/ValidateNameConflict (42.23s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-563238
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563238-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-563238-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (55.397945ms)

                                                
                                                
-- stdout --
	* [multinode-563238-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-563238-m02' is duplicated with machine name 'multinode-563238-m02' in profile 'multinode-563238'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-563238-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-563238-m03 --driver=kvm2  --container-runtime=crio: (40.9909536s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-563238
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-563238: exit status 80 (196.296204ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-563238 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-563238-m03 already exists in multinode-563238-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-563238-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.23s)

TestScheduledStopUnix (113.64s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-423233 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-423233 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.147051826s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-423233 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-423233 -n scheduled-stop-423233
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-423233 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-423233 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-423233 -n scheduled-stop-423233
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-423233
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-423233 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0520 11:29:54.310748   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-423233
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-423233: exit status 7 (63.759253ms)

                                                
                                                
-- stdout --
	scheduled-stop-423233
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-423233 -n scheduled-stop-423233
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-423233 -n scheduled-stop-423233: exit status 7 (54.858159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-423233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-423233
--- PASS: TestScheduledStopUnix (113.64s)

TestRunningBinaryUpgrade (188.84s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1619321647 start -p running-upgrade-153883 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1619321647 start -p running-upgrade-153883 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m38.52043315s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-153883 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0520 11:32:02.237634   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-153883 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.50130088s)
helpers_test.go:175: Cleaning up "running-upgrade-153883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-153883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-153883: (1.164701114s)
--- PASS: TestRunningBinaryUpgrade (188.84s)

TestStoppedBinaryUpgrade/Setup (2.64s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.64s)

TestStoppedBinaryUpgrade/Upgrade (173.76s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1627196670 start -p stopped-upgrade-074146 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1627196670 start -p stopped-upgrade-074146 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.154467461s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1627196670 -p stopped-upgrade-074146 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1627196670 -p stopped-upgrade-074146 stop: (3.309675858s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-074146 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0520 11:32:19.192024   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-074146 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.292918034s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (173.76s)

TestPause/serial/Start (77.98s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-120159 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-120159 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m17.977904948s)
--- PASS: TestPause/serial/Start (77.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-074146
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388098 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-388098 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (55.889025ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-388098] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (43.72s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388098 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388098 --driver=kvm2  --container-runtime=crio: (43.485871549s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-388098 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.72s)

TestNetworkPlugins/group/false (2.62s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-893050 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-893050 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (92.674611ms)

                                                
                                                
-- stdout --
	* [false-893050] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:33:11.760759   50162 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:33:11.761039   50162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:33:11.761048   50162 out.go:304] Setting ErrFile to fd 2...
	I0520 11:33:11.761052   50162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:33:11.761246   50162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-3819/.minikube/bin
	I0520 11:33:11.761760   50162 out.go:298] Setting JSON to false
	I0520 11:33:11.762606   50162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4535,"bootTime":1716200257,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0520 11:33:11.762657   50162 start.go:139] virtualization: kvm guest
	I0520 11:33:11.765049   50162 out.go:177] * [false-893050] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0520 11:33:11.766390   50162 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:33:11.766413   50162 notify.go:220] Checking for updates...
	I0520 11:33:11.767670   50162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:33:11.769193   50162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-3819/kubeconfig
	I0520 11:33:11.770461   50162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-3819/.minikube
	I0520 11:33:11.771846   50162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0520 11:33:11.772973   50162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:33:11.774543   50162 config.go:182] Loaded profile config "NoKubernetes-388098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:33:11.774656   50162 config.go:182] Loaded profile config "kubernetes-upgrade-854547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0520 11:33:11.774799   50162 config.go:182] Loaded profile config "pause-120159": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0520 11:33:11.774893   50162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:33:11.809460   50162 out.go:177] * Using the kvm2 driver based on user configuration
	I0520 11:33:11.810597   50162 start.go:297] selected driver: kvm2
	I0520 11:33:11.810633   50162 start.go:901] validating driver "kvm2" against <nil>
	I0520 11:33:11.810651   50162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:33:11.812493   50162 out.go:177] 
	W0520 11:33:11.813701   50162 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0520 11:33:11.814950   50162 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-893050 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-893050

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-893050

>>> host: /etc/nsswitch.conf:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: /etc/hosts:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: /etc/resolv.conf:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-893050

>>> host: crictl pods:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: crictl containers:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> k8s: describe netcat deployment:
error: context "false-893050" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-893050" does not exist

>>> k8s: netcat logs:
error: context "false-893050" does not exist

>>> k8s: describe coredns deployment:
error: context "false-893050" does not exist

>>> k8s: describe coredns pods:
error: context "false-893050" does not exist

>>> k8s: coredns logs:
error: context "false-893050" does not exist

>>> k8s: describe api server pod(s):
error: context "false-893050" does not exist

>>> k8s: api server logs:
error: context "false-893050" does not exist

>>> host: /etc/cni:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: ip a s:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: ip r s:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: iptables-save:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: iptables table nat:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> k8s: describe kube-proxy daemon set:
error: context "false-893050" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-893050" does not exist

>>> k8s: kube-proxy logs:
error: context "false-893050" does not exist

>>> host: kubelet daemon status:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: kubelet daemon config:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> k8s: kubelet logs:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:32:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.7:8443
  name: pause-120159
contexts:
- context:
    cluster: pause-120159
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:32:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-120159
  name: pause-120159
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-120159
  user:
    client-certificate: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.crt
    client-key: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-893050
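
The kubectl queries in this dump fail because they target the false-893050 context, which was never written to the kubeconfig shown above; the only context present is pause-120159 and current-context is empty. A minimal sketch of how a context is listed and selected (illustrative only, since false-893050 genuinely does not exist here):

    kubectl config get-contexts                      # show what the kubeconfig actually contains
    kubectl config use-context pause-120159          # select the one remaining context, or
    kubectl --context pause-120159 get pods -A       # pass a context explicitly per command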

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-893050"

                                                
                                                
----------------------- debugLogs end: false-893050 [took: 2.396177918s] --------------------------------
helpers_test.go:175: Cleaning up "false-893050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-893050
--- PASS: TestNetworkPlugins/group/false (2.62s)
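
Every false-893050 query above reports "Profile not found" or "context does not exist" because no cluster was ever started for this profile; the group passed in 2.62s and the profile was cleaned up immediately afterwards. A hedged sketch of the existence checks one could run before collecting per-profile debug output (profile name as above):

    minikube profile list --output=json                  # confirm the profile exists at all
    kubectl config get-contexts false-893050 || echo "context missing; skip the kubectl queries"
    minikube ssh -p false-893050 "sudo crictl ps" || echo "cluster not running; skip the host queries"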

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (13.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388098 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388098 --no-kubernetes --driver=kvm2  --container-runtime=crio: (12.649879177s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-388098 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-388098 status -o json: exit status 2 (243.487853ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-388098","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-388098
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-388098: (1.000376381s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.89s)
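
The non-zero exit above is expected: with --no-kubernetes the VM is up but the kubelet and apiserver stay stopped, and minikube status signals that with exit status 2 while still printing the JSON. A hedged sketch of probing the same fields individually (same profile name):

    minikube status -p NoKubernetes-388098 -o json
    minikube status -p NoKubernetes-388098 --format={{.Host}}        # Running
    minikube status -p NoKubernetes-388098 --format={{.Kubelet}}     # Stopped
    minikube status -p NoKubernetes-388098 --format={{.APIServer}}   # Stopped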

                                                
                                    
TestNoKubernetes/serial/Start (24.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388098 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388098 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.61367315s)
--- PASS: TestNoKubernetes/serial/Start (24.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-388098 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-388098 "sudo systemctl is-active --quiet service kubelet": exit status 1 (194.934517ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
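
The exit status 1 is minikube ssh reporting that the remote command failed; the stderr line shows the underlying systemctl is-active exited with status 3, which is what systemd returns for an inactive unit. A hedged way to reproduce the check by hand (dropping --quiet so the state is printed):

    minikube ssh -p NoKubernetes-388098 "sudo systemctl is-active kubelet"
    # prints "inactive" and exits non-zero while the kubelet unit is stopped;
    # "active" with exit 0 would mean Kubernetes components are still running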

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-388098
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-388098: (1.371084833s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (42.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-388098 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-388098 --driver=kvm2  --container-runtime=crio: (42.889403845s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.89s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-388098 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-388098 "sudo systemctl is-active --quiet service kubelet": exit status 1 (186.056869ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (76.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-814599 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:37:19.193020   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-814599 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m16.938147533s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-814599 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [afac3131-a3e6-4bdd-9790-28b8647bbc2c] Pending
helpers_test.go:344: "busybox" [afac3131-a3e6-4bdd-9790-28b8647bbc2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [afac3131-a3e6-4bdd-9790-28b8647bbc2c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004658327s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-814599 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)
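
DeployApp is three steps: create the busybox pod from the repository's testdata, wait for it to become Ready, then read the container's open-file limit. A hedged reproduction using the same label selector and timeout the test uses:

    kubectl --context no-preload-814599 create -f testdata/busybox.yaml
    kubectl --context no-preload-814599 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-814599 exec busybox -- /bin/sh -c "ulimit -n"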

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-814599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-814599 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)
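
EnableAddonWhileActive turns on metrics-server with its image deliberately rewritten onto a fake registry, then describes the Deployment to confirm the override landed. A hedged way to read the resulting image reference directly (the exact composed value is an assumption about how minikube joins --registries and --images):

    kubectl --context no-preload-814599 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to show fake.domain in front of registry.k8s.io/echoserver:1.4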

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (53.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-274976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-274976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (53.93224686s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-668445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:39:54.309921   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-668445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m38.600551622s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-274976 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [493596c1-61a7-4cb2-9871-f7152e3d2d97] Pending
helpers_test.go:344: "busybox" [493596c1-61a7-4cb2-9871-f7152e3d2d97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [493596c1-61a7-4cb2-9871-f7152e3d2d97] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004193659s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-274976 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-274976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-274976 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (697.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-814599 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-814599 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (11m36.890560591s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-814599 -n no-preload-814599
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (697.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a6a1506-9e75-423a-adb9-274636a21dbc] Pending
helpers_test.go:344: "busybox" [6a6a1506-9e75-423a-adb9-274636a21dbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6a6a1506-9e75-423a-adb9-274636a21dbc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004303625s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-668445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-668445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-210420 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-210420 --alsologtostderr -v=3: (3.286901612s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210420 -n old-k8s-version-210420: exit status 7 (62.964996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-210420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (502.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-274976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-274976 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (8m22.15586227s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274976 -n embed-certs-274976
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (502.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (442.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-668445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 11:44:54.310382   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
E0520 11:47:19.191963   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 11:48:42.238589   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
E0520 11:49:54.309672   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-668445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (7m22.108903672s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (442.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (55.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-111176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-111176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (55.902694289s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.90s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (98.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0520 12:07:19.191646   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m38.702882999s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-111176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-111176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.305225242s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-111176 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-111176 --alsologtostderr -v=3: (11.3483098s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-111176 -n newest-cni-111176
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-111176 -n newest-cni-111176: exit status 7 (63.802448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-111176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (39.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-111176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0520 12:08:14.788036   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:14.793320   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:14.803606   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:14.823901   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:14.864233   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:14.944586   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:15.105028   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:15.425490   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:16.066618   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:17.347062   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:19.907525   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
E0520 12:08:25.028230   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-111176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (39.098379429s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-111176 -n newest-cni-111176
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-111176 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-111176 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-111176 -n newest-cni-111176
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-111176 -n newest-cni-111176: exit status 2 (240.268186ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-111176 -n newest-cni-111176
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-111176 -n newest-cni-111176: exit status 2 (244.005539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-111176 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-111176 -n newest-cni-111176
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-111176 -n newest-cni-111176
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.42s)
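
Pause freezes the control-plane containers and stops the kubelet, which is why the status probes right after it exit 2 with APIServer=Paused and Kubelet=Stopped; unpause reverses it. A hedged sketch of the same sequence using the probes shown above:

    minikube pause -p newest-cni-111176
    minikube status -p newest-cni-111176 --format={{.APIServer}}   # Paused, non-zero exit while paused
    minikube status -p newest-cni-111176 --format={{.Kubelet}}     # Stopped
    minikube unpause -p newest-cni-111176
    minikube status -p newest-cni-111176 --format={{.APIServer}}   # Running again, exit 0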

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0520 12:08:35.269067   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m6.425427657s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.43s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-893050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5stz9" [bcdfd480-e400-498e-aef1-79c1d2616c29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5stz9" [bcdfd480-e400-498e-aef1-79c1d2616c29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003310818s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
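
Localhost and HairPin run the same nc probe from inside the netcat pod: the first targets localhost:8080, the second targets the pod's own "netcat" Service name, which only succeeds if hairpin NAT routes the traffic back to the originating pod. The exact commands, runnable by hand:

    kubectl --context auto-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
    # -z only checks for a listener; -w 5 bounds the wait so a broken CNI fails fast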

                                                
                                    
TestNetworkPlugins/group/calico/Start (90.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m30.982362289s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.98s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (109.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0520 12:09:11.478410   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:11.483740   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:11.494115   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:11.514977   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:11.555253   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:11.635613   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:11.796064   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:12.116674   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:12.757205   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:14.038341   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:16.598889   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:21.719899   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m49.466826592s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (109.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-668445 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-668445 --alsologtostderr -v=1
E0520 12:09:31.960326   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-668445 --alsologtostderr -v=1: (1.098820142s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445: exit status 2 (257.977411ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445: exit status 2 (280.398392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-668445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-668445 -n default-k8s-diff-port-668445
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.20s)
E0520 12:11:30.544965   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.637780011s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.64s)
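Each network-plugin suite in this run differs only in the CNI flag handed to minikube start; memory, driver and container runtime are held constant. A condensed sketch of the variants, with a hypothetical profile name:

    # the group above uses the deprecated spelling that maps to the built-in bridge CNI:
    minikube start -p cni-demo --memory=3072 --driver=kvm2 --container-runtime=crio --enable-default-cni=true
    # the flannel, calico and bridge groups shown later swap in --cni=flannel, --cni=calico, --cni=bridge;
    # the kindnet and custom-flannel groups presumably pass --cni=kindnet and a path to a custom CNI manifest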

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r78cd" [12ef2b7e-4045-4ea5-939e-ade77cc16bce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004825774s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-893050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tnvt6" [b677c5b2-c4f8-4efe-9113-860271499059] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tnvt6" [b677c5b2-c4f8-4efe-9113-860271499059] Running
E0520 12:09:52.440741   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
E0520 12:09:54.309516   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/addons-807933/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004408811s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
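The DNS, Localhost and HairPin steps in each plugin group all run against the netcat deployment created from testdata/netcat-deployment.yaml (a deployment named "netcat" fronted by a service of the same name on port 8080, judging from the probes above). Reduced to plain kubectl, the three probes are roughly:

    kubectl exec deployment/netcat -- nslookup kubernetes.default                   # in-cluster DNS
    kubectl exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # pod reaches itself on localhost
    kubectl exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin: pod reaches itself back through its own service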

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (101.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0520 12:10:33.401888   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m41.313388808s)
--- PASS: TestNetworkPlugins/group/flannel/Start (101.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fmnkk" [cc2cff64-c197-4bba-a6ac-80b2396dd941] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006834576s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-893050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-44t4z" [111699b6-99e1-470f-84f8-d4f3224c3fe9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-44t4z" [111699b6-99e1-470f-84f8-d4f3224c3fe9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004387836s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-893050 replace --force -f testdata/netcat-deployment.yaml
E0520 12:10:58.630857   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/no-preload-814599/client.crt: no such file or directory
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-893050 replace --force -f testdata/netcat-deployment.yaml: (1.627542722s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gdnxp" [1b50e8fa-e82d-46a4-97e5-11933ab2e712] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gdnxp" [1b50e8fa-e82d-46a4-97e5-11933ab2e712] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005331713s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-893050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b6x95" [0cd81381-5f67-42fb-bbe8-0f65d27589e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b6x95" [0cd81381-5f67-42fb-bbe8-0f65d27589e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008736607s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (64.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-893050 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m4.040921268s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wx5wm" [341e7b4d-9467-4d3b-b93a-88fc2c89d1d4] Running
E0520 12:11:55.322539   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/old-k8s-version-210420/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004139433s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
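The ControllerPod steps simply wait for the CNI's own pods to become Ready; for flannel that is the kube-flannel-ds-* pods in the kube-flannel namespace, as logged above. An equivalent manual check outside the harness would be roughly:

    kubectl -n kube-flannel get pods -l app=flannel
    kubectl -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m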

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-893050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tj854" [566bde5c-9916-4b90-8b55-e272b5ef5eb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0520 12:12:01.266970   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/default-k8s-diff-port-668445/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-tj854" [566bde5c-9916-4b90-8b55-e272b5ef5eb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005104504s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-893050 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-893050 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kh5dm" [3eea2a05-69e1-4655-ae23-81ce5d1984d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kh5dm" [3eea2a05-69e1-4655-ae23-81ce5d1984d3] Running
E0520 12:12:19.191384   11162 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/functional-044317/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003980103s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-893050 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-893050 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (36/313)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
256 TestStartStop/group/disable-driver-mounts 0.12
267 TestNetworkPlugins/group/kubenet 2.71
275 TestNetworkPlugins/group/cilium 2.85
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
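All of the TunnelCmd steps above are skipped on this host because minikube tunnel needs to install host routes via 'route', and the CI user cannot run that without a password prompt. On a workstation with passwordless sudo, the flow they cover is roughly (profile name hypothetical):

    minikube tunnel -p my-cluster &        # installs a route to the cluster's service network
    kubectl get svc -o wide                # LoadBalancer services should now show a reachable external IP
    kill %1                                # stopping the tunnel removes the route again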

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-793579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-793579
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-893050 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-893050" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:32:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.7:8443
  name: pause-120159
contexts:
- context:
    cluster: pause-120159
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:32:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-120159
  name: pause-120159
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-120159
  user:
    client-certificate: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.crt
    client-key: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.key
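This captured kubeconfig only knows about the pause-120159 profile and has an empty current-context, which is why the kubectl probes in this debugLogs dump keep failing with "context was not found" for kubenet-893050. A quick sanity check on such a machine (not part of the test run) would be:

    kubectl config get-contexts       # lists pause-120159 only
    kubectl config current-context    # errors while current-context is unset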

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-893050

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-893050"

                                                
                                                
----------------------- debugLogs end: kubenet-893050 [took: 2.590239749s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-893050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-893050
--- SKIP: TestNetworkPlugins/group/kubenet (2.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-893050 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-893050" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18925-3819/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:32:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.7:8443
  name: pause-120159
contexts:
- context:
    cluster: pause-120159
    extensions:
    - extension:
        last-update: Mon, 20 May 2024 11:32:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-120159
  name: pause-120159
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-120159
  user:
    client-certificate: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.crt
    client-key: /home/jenkins/minikube-integration/18925-3819/.minikube/profiles/pause-120159/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-893050

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-893050" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-893050"

                                                
                                                
----------------------- debugLogs end: cilium-893050 [took: 2.722337618s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-893050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-893050
--- SKIP: TestNetworkPlugins/group/cilium (2.85s)

                                                
                                    